An interpretation of quantum mechanics that—as far as I understand—is an extension of the Copenhagen interpretation, with the main difference being that it rejects the notion of a single physical reality that is independent of the observer. Each observer, it seems to go, experiences its own Everett branch. This resolves the measurement problem.
The main difference between it and Many Worlds seems to be that Everett branches are indexed by observers rather than possibilities. In other words, observers do not split. From this it seems to follow that Everett branches cannot diverge under Relational Quantum Mechanics: as soon as two observers interact with each other, their realities must synchronize, which constrains the set of possible histories.
To put it in terms of Schrödinger’s Cat: If the cat survives according to its own experience, then the physicist who opens its box must find it alive. This is not true under Many Worlds, where there will be copies of the physicist who find it alive and those who find it dead.
Personally speaking, as far as Copenhagen-based interpretations go, this one seems at least not quite so insane.
I still find Many Worlds (plus decoherence) more intuitive, though.
A group of physicists, founded in 1975, that primarily discussed quantum mysticism but was apparently still pretty influential.
Interval tree clocks for Haskell.
An interval tree clock is like a vector clock except it can shrink as well as grow.
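For contrast, here is a minimal vector clock sketch in Java (made up for illustration, not taken from the library): entries can only ever be added and incremented, which is exactly the limitation interval tree clocks lift by letting identities be forked and joined again.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal vector clock sketch: the map only ever gains entries and counters
// only ever increase, so the clock grows with the number of participants.
// An interval tree clock additionally supports fork and join of identities,
// so its structure can shrink again.
public class VectorClock {
    private final Map<String, Long> counters = new HashMap<>();

    /** Record a local event at the given node. */
    public void increment(String nodeId) {
        counters.merge(nodeId, 1L, Long::sum);
    }

    /** Merge another clock into this one by taking the entry-wise maximum. */
    public void merge(VectorClock other) {
        other.counters.forEach((node, count) -> counters.merge(node, count, Long::max));
    }

    @Override
    public String toString() {
        return counters.toString();
    }
}
```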
The top 3 are, as you may have expected, Chez Scheme, Gambit-C, and Racket.
A fast R⁶RS Scheme compiler. Was proprietary for a long time, but is Free Software nowadays. (GitHub: cisco/chezscheme.)
I don’t find this particularly surprising.
An annotation processor that implements Jakarta JSON-B via code generation.
Subqueries, lateral joins, and arrays. HQL is growing into a rather powerful query language.
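As a sketch of what that looks like (hypothetical Author and Book entities, and assuming a Hibernate version with lateral-join support in HQL, i.e. 6.x):

```java
import java.time.LocalDate;
import java.util.List;
import org.hibernate.Session;

public class HqlSketch {
    // Sketch only: Author and Book are made-up entities. The lateral join
    // computes a per-author aggregate; the where clause uses a correlated subquery.
    static List<Object[]> recentlyActiveAuthors(Session session, LocalDate since) {
        return session.createQuery(
                """
                select a.name, stats.bookCount
                from Author a
                join lateral (
                    select count(b) as bookCount
                    from Book b
                    where b.author = a
                ) stats
                where (select max(b.publishedOn) from Book b where b.author = a) > :since
                """, Object[].class)
            .setParameter("since", since)
            .getResultList();
    }
}
```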
A Java source code transformation engine, usable for refactoring, API migration, and such.
Matthew Yglesias on the housing crisis.
I wasn’t aware that zoning rules are as strict as they are. It sounds a bit insane. I wonder what it’s like in Germany—probably no better if I were to guess.
But I suppose that, in addition to being a NIMBY vs. YIMBY question, it is also an instance of the principle of trying to protect people by taking choices away from them. I wonder how often that works.
A test suite generator for Java. Attempts to automatically generate JUnit test suites that target a given coverage criterion by searching the space of unit tests, encoding in them the current behavior of the code.
An implementation of Jakarta JSON Binding (JSON-B).
An implementation of Jakarta JSON Processing (JSON-P).
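To illustrate what code against the two APIs looks like (a minimal sketch; the Point class is made up): JSON-B binds Java objects to and from JSON, while JSON-P exposes a generic document model.

```java
import jakarta.json.Json;
import jakarta.json.JsonObject;
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;

import java.io.StringReader;

public class JsonDemo {
    // A plain data class; by default JSON-B maps its public fields by name.
    public static class Point {
        public int x;
        public int y;
    }

    public static void main(String[] args) throws Exception {
        // JSON-B: bind Java objects to and from JSON.
        try (Jsonb jsonb = JsonbBuilder.create()) {
            Point p = new Point();
            p.x = 1;
            p.y = 2;
            String json = jsonb.toJson(p);              // e.g. {"x":1,"y":2}
            Point back = jsonb.fromJson(json, Point.class);
            System.out.println(json + " -> x=" + back.x);
        }

        // JSON-P: work with the generic JSON document model directly.
        JsonObject obj = Json.createReader(new StringReader("{\"x\":1,\"y\":2}")).readObject();
        System.out.println(obj.getInt("x"));
    }
}
```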
Keymaps for keyboards with programmable firmware.
The only one for NEO (my preferred layout) appears to be for the Kyria, but I’m sure it can serve as good inspiration for other keyboards.
A series of audio episodes by ZSA comparing different kinds of mechanical key switches.
A promotional video by KeebMaker that outlines the ways an ergonomic keyboard is better than a conventional one. 5 minutes.
Reads JFR recordings from remote or local Java virtual machines. A programmatic interface to what you can do with jcmd.
If I understand correctly, this is neither based on nor otherwise related to JFR Event Streaming.
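Not the tool itself, but for comparison: the JDK already ships jdk.jfr.consumer.RecordingFile for reading a recording that has been dumped to a file, e.g. with jcmd’s JFR.dump. A sketch:

```java
import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class PrintJfrEvents {
    public static void main(String[] args) throws Exception {
        // Read a recording dumped beforehand, e.g. via `jcmd <pid> JFR.dump filename=recording.jfr`.
        for (RecordedEvent event : RecordingFile.readAllEvents(Path.of("recording.jfr"))) {
            System.out.println(event.getStartTime() + " " + event.getEventType().getName());
        }
    }
}
```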
Summary: Only log errors that require intervention, nothing else.
In general that’s reasonable advice, and the article makes some good points:
- logging is not free; it has a non-negligible performance impact
- there are better tools for most of the problems that people tend to use logs to solve
I would add:
- logs are a user interface; it is important to keep them minimal so that they stay usable
But some of the details don’t really make sense.
The article suggests using plain println in order to avoid overhead, but in fact access to stdout/stderr is typically what’s most expensive about logging, which actual logging frameworks mitigate by offloading it to a worker thread.
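As a sketch of that offloading (not any particular framework’s implementation): callers merely enqueue a message, and a single background thread pays for the actual write.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of an asynchronous appender: log() only enqueues,
// while a daemon worker thread performs the slow write to stdout.
public class AsyncStdoutLogger {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    public AsyncStdoutLogger() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    System.out.println(queue.take()); // the expensive I/O happens here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "log-writer");
        worker.setDaemon(true);
        worker.start();
    }

    public void log(String message) {
        queue.offer(message); // cheap for the caller; drops messages if the queue is full
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncStdoutLogger log = new AsyncStdoutLogger();
        log.log("hello from the caller thread");
        Thread.sleep(100); // give the daemon worker a moment to flush before the JVM exits
    }
}
```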
The author recommends against logging progress, suggesting metrics instead. Having metrics is surely a good idea, but in batch processing, logging progress can make sense because it gives more immediate feedback after the rollout of a new version than metrics collection, which tends to be laggy.
There is also the implied assumption that you have a whole host of infrastructure at your fingertips with which to replace your logging, such as trace collection, metrics collection, and so on. That may be true in a Cloud environment, but in other environments such things may be more expensive to maintain.
Overall I agree with the notion that you should err on the side of logging less rather than more. But if you do have something to log, then (1) do it freely and (2) use a proper logging framework.