Summary: Only log errors that require intervention, nothing else.

In general that’s reasonable advice, and the article makes some good points:

  • logging is not free; it has a non-negligible performance impact
  • there are better tools for most of the problems that people tend to use logs to solve

I would add:

  • logs are a user interface; it is important to keep them minimal so that they stay usable

But some of the details don’t really make sense.

The article suggests using plain println to avoid overhead. But writing to stdout/stderr is typically the most expensive part of logging, and it is precisely what real logging frameworks mitigate by offloading the write to a worker thread.
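
To make that concrete, here is a minimal sketch of the idea, not any particular framework’s implementation: the caller only enqueues the message, and a single background thread performs the comparatively slow console write. The class name and queue size are made up for illustration.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch of what async appenders in real frameworks do: callers only
    // enqueue; one background thread performs the slow write to stderr.
    final class AsyncConsoleLogger {
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);

        AsyncConsoleLogger() {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        System.err.println(queue.take()); // blocking I/O happens here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "log-writer");
            writer.setDaemon(true);
            writer.start();
        }

        void log(String message) {
            // If the queue is full, drop the message rather than block the caller.
            queue.offer(message);
        }
    }

Real frameworks add batching, flushing, and configurable overflow policies on top of this, but the core point is the same: the hot path never waits on console I/O.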

The author recommends not logging progress and using metrics instead. Metrics are certainly a good idea, but in batch processing, logging progress can still make sense: it gives more immediate feedback after rolling out a new version than metrics collection, which tends to lag.
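
As a small illustration of what I mean, a batch job might log progress at a coarse interval; the class name, interval, and per-record work below are hypothetical, and only the JDK’s built-in java.util.logging is used.

    import java.util.List;
    import java.util.logging.Logger;

    // Illustrative batch loop: a coarse progress line every 10,000 records
    // shows the new version is alive and moving, before metrics catch up.
    final class BatchJob {
        private static final Logger LOG = Logger.getLogger(BatchJob.class.getName());

        void run(List<String> records) {
            for (int i = 0; i < records.size(); i++) {
                handle(records.get(i));
                if ((i + 1) % 10_000 == 0) {
                    LOG.info("processed " + (i + 1) + " of " + records.size() + " records");
                }
            }
        }

        private void handle(String record) {
            // placeholder for the actual per-record work
        }
    }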

There is also the implied assumption that you have a whole host of infrastructure at your fingertips to replace your logging with: trace collection, metrics collection, and so on. That may be true in a cloud environment, but elsewhere such infrastructure can be expensive to maintain.

Overall I agree with the notion that you should err on the side of logging less rather than more. But if you do have something to log, then (1) do it freely and (2) use a proper logging framework.
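
For what the second point can look like in practice, here is a small example against the SLF4J API; the class and message are made up. Parameterized messages avoid formatting cost when the level is disabled, and the backend can write asynchronously.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class PaymentProcessor {
        private static final Logger LOG = LoggerFactory.getLogger(PaymentProcessor.class);

        void process(String orderId) {
            try {
                charge(orderId);
            } catch (RuntimeException e) {
                // Parameterized message: the string is only built if the level is enabled;
                // passing the exception as the last argument logs its stack trace too.
                LOG.error("failed to charge order {}", orderId, e);
            }
        }

        private void charge(String orderId) {
            // placeholder for the actual work
        }
    }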