add gRPC UnaryServerInterceptor for kubeapps-apis #3780
Conversation
A sample log file here
```go
res, err := handler(ctx, req)
log.Infof("LogRequest - %s %v %s\n",
	time.Since(start),
	status.Code(err),
```
Just wondering: are we sure `err` will always exist? I mean, what if it is nil?
Also: should we move it to another file, or is it better to do that once we have more interceptors configured?
status.Code(nil) is OK: https://play.golang.org/p/6JbcUA1jYAX
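A minimal runnable sketch of the same check (not the linked playground code verbatim):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// status.Code returns codes.OK when the error is nil, so passing the
	// handler's error straight through is safe even on success.
	fmt.Println(status.Code(nil)) // prints "OK"
}
```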
Thanks Satya. The formatting there for this request logging is perfect. I don't think we should use `fmt.Printf` for a number of reasons, but I think we can easily use the standard `log` package. There's a nice answer on Stack Overflow that outlines the differences between `fmt.Printf` and `log.Printf`, but the main two here are, IMO:

- logs generally go to stderr. Yes, we could still use the `fmt` package to do the same, but it wouldn't help with:
- loggers guarantee that output won't be interleaved, whereas `fmt` does not.

For this reason, I'd recommend doing something very similar to what you've done here, but using the standard `log` package. Here's an example that I just did to demo: https://go.dev/play/p/eAGBNYQE-59
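A sketch in the spirit of that demo, using the standard library `log` package (the exact code at the playground link may differ):

```go
package server

import (
	"context"
	"log"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

// logger is created once and writes to stderr with timestamps, so output
// from concurrent RPCs won't be interleaved mid-line.
var logger = log.New(os.Stderr, "", log.LstdFlags)

// LogRequest logs the duration, status code and full method of each unary RPC.
func LogRequest(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	start := time.Now()
	res, err := handler(ctx, req)
	logger.Printf("LogRequest - %s %v %s", time.Since(start), status.Code(err), info.FullMethod)
	return res, err
}
```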
```diff
 // Create the grpc server and register the reflection server (for now, useful for discovery
 // using grpcurl) or similar.
-grpcSrv := grpc.NewServer()
+grpcSrv := grpc.NewServer(grpc.ChainUnaryInterceptor(LogRequest))
```
Although not now, what if we were to add more interceptors? Can we add as many of them as desired (as below)? Or do we have to use `ChainUnaryServer`? (Just asking, I don't know.)

`grpc.ChainUnaryInterceptor(LogRequest, myInterceptor1, foo, bar)`
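For what it's worth, `grpc.ChainUnaryInterceptor` is variadic and runs the interceptors in the order given, so no separate `ChainUnaryServer` helper is needed. A sketch, keeping the hypothetical names from the comment (only LogRequest exists in this PR):

```go
package server

import "google.golang.org/grpc"

// newServer chains several unary interceptors in execution order.
// myInterceptor1, foo and bar are hypothetical placeholders.
func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.ChainUnaryInterceptor(LogRequest, myInterceptor1, foo, bar),
	)
}
```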
```go
// LogRequest is a gRPC UnaryServerInterceptor that will log the API call
func LogRequest(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (response interface{}, err error) {
	start := time.Now()
	res, err := handler(ctx, req)
```
What I find useful here is having a way to also log the request params. I mean, I don't like the idea of handling req params in each operation handler; something like:

```
+core GetAvailablePackageSummaries (cluster="kubeapps", namespace="kubeapps")
```

What if we add a more verbose trace (like V4) to also log the request params? WDYT?
Yeah, it'd be nice, but I think that's difficult because the request message is different for each RPC call, so we can't just pull out the cluster and namespace, for example. At least, I don't see a way to do so without something very ugly like trying to cast the `interface{}` to each type of request message. But maybe you've thought of another way, Antonio? Hmm, perhaps we could define an interface with a `RequestSummary()` func that we ensure for each request message and just cast to that... but either way, I think that should be out of scope for this PR (but worth investigating).
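A rough sketch of that `RequestSummary()` idea, purely hypothetical (the interface, struct and method names below are illustrations, not code from this PR):

```go
package server

// RequestSummarizer is the hypothetical interface floated above: any request
// message that wants its parameters logged implements RequestSummary.
type RequestSummarizer interface {
	RequestSummary() string
}

// summarize needs just one type assertion against the interface, rather than
// a cast to every concrete request message type.
func summarize(req interface{}) string {
	if s, ok := req.(RequestSummarizer); ok {
		return s.RequestSummary()
	}
	return "" // messages without a summary simply add nothing to the log line
}

// hypotheticalRequest shows how a request message could opt in.
type hypotheticalRequest struct{ Cluster, Namespace string }

func (r *hypotheticalRequest) RequestSummary() string {
	return `(cluster="` + r.Cluster + `", namespace="` + r.Namespace + `")`
}
```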
@absoludity, please review and help to close it.
absoludity left a comment
> @absoludity, please review and help to close it.
See inline. The format looks great, we just need to swap out fmt.Printf as detailed below. +1 to land once that's done.
Once this is landed, we should follow up with a PR which adds some config so that the liveness/readiness checks are only printed when the verbosity is at a certain level (V(4) is what Greg used, which makes sense: debug only). But let's discuss that once this lands.
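One possible shape for that follow-up, purely as a hypothetical sketch (the health-service method name and the verbosity parameter are assumptions, not part of this PR):

```go
package server

// shouldLog demotes liveness/readiness probe calls to verbose-only output,
// so normal logs aren't flooded by health checks.
func shouldLog(fullMethod string, verbosity int) bool {
	if fullMethod == "/grpc.health.v1.Health/Check" {
		return verbosity >= 4 // probes logged only at V(4)-style debug verbosity
	}
	return true
}
```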
```diff
 // Create the grpc server and register the reflection server (for now, useful for discovery
 // using grpcurl) or similar.
-grpcSrv := grpc.NewServer()
+grpcSrv := grpc.NewServer(grpc.ChainUnaryInterceptor(LogRequest))
```
Ideally we'd create the logger here (logger := log.New(...)), but to include it in the LogRequest function (without instantiating it every time), we'll probably need to use a closure. The simplest way would be to just create the logger here, then move the definition of your LogRequest function inline here so it can reference the logger. That way the logger is only created once.
That would be fine and do the job, but if you're more familiar with closures, you could instead create a function that takes the created logger as input and returns the request-logger function, `CreateRequestLogger(log.Logger) func(context.Context, ...) (interface{}, error) { ... }`, which is a bit nicer if you're used to closures in Go. Fine either way.
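A sketch of that closure form, assuming a `*log.Logger` (the standard library logger is normally passed by pointer) and `grpc.UnaryServerInterceptor`, the matching function type in grpc-go:

```go
package server

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

// CreateRequestLogger captures the logger created by the caller, so the
// returned interceptor reuses a single logger instance across all requests.
func CreateRequestLogger(logger *log.Logger) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		start := time.Now()
		res, err := handler(ctx, req)
		logger.Printf("LogRequest - %s %v %s", time.Since(start), status.Code(err), info.FullMethod)
		return res, err
	}
}
```

Registration would then look like `grpc.NewServer(grpc.ChainUnaryInterceptor(CreateRequestLogger(logger)))`.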
Done. Updated the logger as requested. See commit.
Perfect, thanks Satya. I'm unable to re-trigger CI (since the branch is under your GH account). If you're able to do so by clicking "Re-run -> Re-run from failed" on the CI build page, please do so. Otherwise, adding an empty commit and pushing again will retrigger the whole build.
Hi there! We are in the process of standardizing on logging practices and found this patch, which contains very interesting discussions :) I have a couple of potentially basic questions though:

1. Why did you discard the use of logrus, or of the existing logrus interceptor?

Thank you so much, Kubeapps and you all rock! :)
HTH,
Thanks Satya, that makes sense. In any case, my question wasn't "why klog is better than log" but why you keep using the standard library `log` package rather than klog.
I don't remember all the context, but I think it was because klog assumes that you are always calling it from an actual call-site, including the file/line (with the default settings). So using it in an interceptor, where you are adding a generic log statement for many call-sites, results in every one being logged with useless info of the interceptor call-site. Not sure if that was the whole reason.
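A small illustration of that attribution point, assuming k8s.io/klog/v2 (`InfoDepth` is klog's depth-aware variant for shifting the reported call-site):

```go
package server

import "k8s.io/klog/v2"

// logRPC would be called from the interceptor for every RPC. With klog's
// default header, a plain klog.Info here would stamp every message with this
// file and line, regardless of which handler actually served the request.
func logRPC(msg string) {
	// InfoDepth(1, ...) attributes the message to logRPC's caller instead of
	// to logRPC itself, but it still can't point at the real handler, which
	// per the comment above is one reason the standard log package was kept.
	klog.InfoDepth(1, msg)
}
```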
That makes sense, thanks for the context. I'll give it a try on our end, since we might actually be running into what you just mentioned and didn't notice :)
Description of the change
Use an interceptor to automatically log every RPC call to the kubeapps-apis server.
Benefits
Logging is implemented by intercepting all RPC calls in one location, instead of instrumenting each handler separately.
Possible drawbacks
The kubectl logs for pod/kubeapps-internal-kubeappsapis will be flooded with more events.
Applicable issues
This tries to address #3385
Additional information
N/A