
An interpretation of quantum mechanics that—as far as I understand—is an extension of the Copenhagen interpretation, with the main difference being that it rejects the notion of a single physical reality that is independent of the observer. Each observer, it seems to go, experiences its own Everett branch. This resolves the measurement problem.

The main difference between it and Many Worlds seems to be that Everett branches are indexed by observers rather than possibilities. In other words, observers do not split. From this it seems to me to follow that Everett branches cannot diverge under Relational Quantum Mechanics; as soon as two observers interact with each other, their realities must synchronize, which puts constraints on the set of possible histories.

To put it in terms of Schrödinger’s Cat: If the cat survives according to its own experience, then the physicist who opens its box must find it alive. This is not true under Many Worlds, where there will be copies of the physicist who find the cat alive and copies who find it dead.

Personally speaking, as far as Copenhagen-based interpretations go, this one seems at least not quite so insane.

I still find Many Worlds (plus decoherence) more intuitive, though.

Interval tree clocks for Haskell.

An interval tree clock is like a vector clock except it can shrink as well as grow.


Simulating bad drive blocks with Device Mapper

Say you have a 0.5 MiB (= 1,024 sectors of 512 bytes each) drive at /dev/loop0 and would like to boot it with QEMU while simulating a broken sector at position 256.
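
If you do not already have such a loop device, one way to set it up might look like the following (a sketch; drive.img is just a placeholder name, and you may need losetup -f to find a free loop device on your system):

dd if=/dev/zero of=drive.img bs=512 count=1024
losetup /dev/loop0 drive.img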

You can use dm-error for this.

Write the following table into a file and call it broken-drive.dm (each line consists of a start sector, a length in sectors, a target type, and that target’s arguments):

0 256 linear /dev/loop0 0
256 1 error
257 767 linear /dev/loop0 257

Alternatively, you can make use of dm-flakey to simulate a sector that is only sometimes broken, or that does something even worse, such as dropping any writes made to it. For example:

0 256 linear /dev/loop0 0
256 1 flakey /dev/loop0 256 5 5
257 767 linear /dev/loop0 257
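
Here the two trailing numbers are dm-flakey’s up and down intervals in seconds: the sector behaves normally for 5 seconds, then fails all I/O for the next 5 seconds, and so on. If you want it to silently drop writes during the down periods instead of returning errors, something like the following should work (a sketch using the drop_writes feature flag; check the kernel documentation for the exact semantics):

256 1 flakey /dev/loop0 256 5 5 1 drop_writes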

Refer to the documentation of dm-flakey for the details of how it works and what parameters it expects.

Create a virtual device at /dev/mapper/broken-drive using dmsetup create:

dmsetup create broken-drive <broken-drive.dm
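
If you want to double-check what was created, dmsetup can print the active table back:

dmsetup table broken-drive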

You can now use it with QEMU just like any other drive or drive image.
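
For example, something along these lines should work (a sketch; the exact flags depend on your QEMU setup):

qemu-system-x86_64 -drive file=/dev/mapper/broken-drive,format=raw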

A Java source code transformation engine, usable for refactoring, API migration, and such.

Matthew Yglesias on the housing crisis.

I wasn’t aware that zoning rules are as strict as they are. It sounds a bit insane. I wonder what it’s like in Germany—probably no better if I were to guess.

But I suppose that in addition to being a NIMBY vs. YIMBY question, it is also an instance of the principle of trying to protect people by taking choices away from them. I wonder how often that works.

A test suite generator for Java. Attempts to automatically generate JUnit test suites that target a given coverage criterion by searching the space of unit tests, encoding in them the current behavior of the code.

Keymaps for keyboards with programmable firmware.

The only one for NEO (my preferred layout) appears to be for the Kyria, but I’m sure it can serve as good inspiration for other keyboards.

Summary: Only log errors that require intervention, nothing else.

In general that’s reasonable advice and the article makes some good points, which are:

  • logging is not free; it has a non-negligible performance impact
  • there are better tools for most of the problems that people tend to use logs to solve

I would add:

  • logs are a user interface; it is important to keep them minimal so that they stay usable

But some of the details don’t really make sense.

The article suggests using plain println in order to avoid overhead, but in fact access to stdout/stderr is typically what’s most expensive about logging, which actual logging frameworks mitigate by offloading it to a worker thread.

The author recommends not logging progress but using metrics instead. Surely having metrics is a good idea, but in batch processing, logging progress can make sense because it gives more immediate feedback after the rollout of a new version than metrics collection, which tends to be laggy.

There is also the implied assumption that you have a whole host of infrastructure at your fingertips that you can make use of to replace your logging, such as trace collection, metrics collection, and so on. That may be true in a Cloud environment, but in other environments such things may be more expensive to maintain.

Overall I agree with the notion that you should err on the side of logging less rather than more. But if you do have something to log, then (1) do it freely and (2) use a proper logging framework.
