Wheat and politics

Side note: today Russia banned grain exports for the rest of the year, following droughts and wildfires. Many other countries (with the notable exception of the US) have been having major grain production shortfalls as well.

One country which may be particularly affected by this is Egypt; the government there subsidizes effectively free bread, which is a key factor in maintaining some semblance of social stability, especially in Cairo and Alexandria. This bread subsidy is one of the biggest line items in Egypt’s budget, and Russia is their primary grain supplier. This year (and next, if it continues) could push the country’s budget over the edge, and have a significant impact on the government’s stability.

Published on August 5, 2010 at 13:39  Comments (2)

Design Principles: When to Reimplement

This is another post about design. It’s about a principle which can apply fairly broadly; it could equally be about how to structure an API in a software system, or about how to handle a requirement in a business. Here it is in two flavors:

The software version

If your system’s dependency on another system cannot be expressed through a narrow, stable API, don’t depend on that external system — instead, reimplement it yourself.

The business version

If your business depends on some core function, and you care about the details of how the job is done, rather than just whether it’s done to some simple standard, don’t outsource that function. (For example, FedEx needs to fly its own planes.)

At first glance this may sound extreme; “reimplement it yourself” / “do it in-house” is a tall order for many things you may rely on. But in practice, this sort of decision can be life or death for your system. The reason is that, if you care about how a job is done in detail, you’re going to want to probe into it in depth; you’re going to want detailed controls over the individual steps of the task; you’re going to want to be involved in the day-to-day of the operations to make sure it’s done to your particular needs. In terms of software, this means that you won’t be communicating with this system just via a narrow API like “write this data to a file”; you’ll be using complex APIs, bypassing APIs altogether to get at internal state, and so on.
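To make the software version concrete, here’s a minimal sketch (all names invented for illustration) of the difference between depending on a narrow API and reaching past it into internal state:

```python
class InMemoryStore:
    """Stand-in for an external system you might depend on."""

    def __init__(self):
        self._data = {}          # internal state
        self._write_count = 0    # internal state

    # The narrow, stable API surface:
    def write(self, key, value):
        self._data[key] = value
        self._write_count += 1

    def read(self, key):
        return self._data[key]


def narrow_user(store):
    # Depends only on write/read; any conforming implementation
    # can be swapped in behind these calls.
    store.write("config", "v1")
    return store.read("config")


def leaky_user(store):
    # Reaches past the API into internals. This code is now coupled
    # to one particular implementation, which is the point at which,
    # per the principle above, you effectively need to own it.
    store.write("config", "v1")
    return store._write_count
```

The narrow caller survives any reimplementation of the store; the leaky caller breaks the moment the internals change, which is exactly the kind of dependency the principle warns about.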

As this progresses, you gradually move from using the system to being intimately involved in it, debugging it, and ultimately needing to modify it to your particular needs. But crucially, if you don’t control that system, you can’t do that.

Now, this doesn’t mean that you shouldn’t consider outsourcing the job at first, and moving in-house when your need to mess with the details grows. But if you’re going to do that, you need to recognize that the design constraints of working with this external system are going to shape your own design from the get-go, and even once you go in-house, the legacy of those decisions will be with you forever. If you’re confident in the API, believing that these design choices will still be correct afterwards and that the changes you make as you go in-house will simply extend that initial relationship, great; but if you suspect that your needs will end up fundamentally different from what the external system offers, you may want to bite the bullet and do it yourself from the start.

There’s an obvious risk in doing this, of course; it takes more time and money, and doesn’t give you an immediate advantage over a competitor who outsources. But this risk can pay off if you know that you’re going to hit that transition point reasonably soon: a competitor who built around the wrong outsourced dependency will suddenly find themselves in need of a massive redesign, while you’re revealing wonderful new features to the world.

Published on July 20, 2010 at 12:30  Comments (3)

That’s great, it starts with an earthquake, every 26.2Myrs.

It’s not often that reading a research paper makes me literally jump up in my seat and yell “Holy shit!,” but that’s what I did when I saw figure 1.

A fascinating new paper is up on the arXiv today about the Nemesis Hypothesis. This hypothesis originated in the 1980s, when people noticed that mass extinctions tended to happen every 27 million years or so, and it was suggested that this might be because some hard-to-see star — perhaps a red dwarf — is orbiting with our sun in a very eccentric orbit, passing close to us only every so often and in the process pulling in hordes of comets from the Oort cloud and bringing about death and destruction. (A second periodicity, of roughly 62 Myr, also exists; it’s known that these extinctions also seem to be tied to oscillations in the sea level, so comets are considered a less likely cause, and there’s no analogous Nemesis2 hypothesis that I know of.)

What the new paper did was accumulate a very detailed record of extinction rates over the past 500 Myr, using all of the best datasets available, and produce a plot of the fraction of species going extinct per unit time. The Fourier transform of this curve is simply shocking: it shows two extremely clear peaks, at periods of 62 and 26.2 Myr respectively. Amazingly, these peaks are clear enough to rule out the Nemesis hypothesis: if the periodicity were due to a star’s motion, it would actually have to be less perfectly regular than the experimental data, because the star’s motion would be perturbed by other stars, the galactic disk, and so on. (There’s a good summary paper of that.)
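The pipeline described here (extinction fraction versus time, then a Fourier transform to pick out periodicities) is easy to sketch on synthetic data. This is a toy reconstruction of my own, with an assumed sampling interval, not the paper’s code or dataset:

```python
import numpy as np

dt = 0.5                                   # sampling interval in Myr (assumed)
t = np.arange(0.0, 500.0, dt)              # 500 Myr of record
rng = np.random.default_rng(0)

# Synthetic extinction fraction: a 26.2 Myr cycle buried in noise.
period = 26.2
extinction = 1.0 + 0.5 * np.cos(2 * np.pi * t / period)
extinction += 0.1 * rng.standard_normal(t.size)

# Power spectrum of the mean-subtracted series.
power = np.abs(np.fft.rfft(extinction - extinction.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per Myr

peak_period = 1.0 / freqs[np.argmax(power)]
print(f"recovered period: {peak_period:.1f} Myr")
```

A real analysis would also need significance estimates against a noise background, which is where the “sharp enough to rule out Nemesis” argument lives.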

I was up much later than I should have been reading through these papers and thinking through the results. Some things seem clear:

  1. These guys seem to have done fairly serious numerical analysis. I’m not qualified to evaluate their data sources and the prospective issues on that side, so I’ll have to wait for the specialist community to weigh in, but I didn’t see any red flags in the paper. From looking at the Nemesis papers, it seems pretty clear that if their statistical analysis is good, then Nemesis is genuinely ruled out; I couldn’t think of any variation on that hypothesis which would survive this data.
  2. The peaks on the graph in this paper are holy-shit sharp and distinct. Something is happening with clockwork regularity that wipes out most of life on Earth.

I’m now spending some time thinking about this and about what these spikes may mean. One interesting question is whether the 62 Myr and 26.2 Myr peaks are related or caused by completely disjoint phenomena. Interestingly, there’s another bump on the graph at about 17 Myr, although it looks like it’s just at the edge of statistical significance, not quite big enough to tell for sure whether it’s a real signal. If there is a bump there, then these bumps have the odd property of being evenly spaced in frequency space, at intervals of about 0.02 Myr⁻¹. The pattern of frequencies ω₀ + nω₁ is familiar from many differential equations — e.g., it’s the pattern of energy levels of a quantum-mechanical harmonic oscillator — so it’s something which could naturally emerge from a fairly wide range of physical phenomena.
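The even-spacing claim is a quick back-of-the-envelope check: convert the three periods read off the plot into frequencies and look at the gaps.

```python
periods = [62.0, 26.2, 17.0]                 # Myr, read off the plot
freqs = [1.0 / p for p in periods]           # cycles per Myr

gaps = [freqs[i + 1] - freqs[i] for i in range(len(freqs) - 1)]
# freqs ~ [0.0161, 0.0382, 0.0588]; gaps ~ [0.0220, 0.0207],
# i.e. roughly even spacing at ~0.02 cycles per Myr.
print(gaps)
```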

On the other hand, the two could be wholly separate, or they could be the only two real spikes. It’s going to be hard to tell without staring at the raw data for a while, and even then we may not have enough precision to really know. One thing which I do suspect we’ll be able to determine from this dataset is whether either of the two cycles could be coming from purely biological or other complex systems such as clathrate guns — such systems seem less likely to have extremely precise and stable periods. Honestly, the first thing that pops into my mind when I see this level of stability is pulsars or astrophysical jets; we don’t know a lot about super-long-period pulsars, and by their very nature it would be awfully hard to learn much about them. Call this the “Cosmic Death Ray Hypothesis.” Other interesting possibilities could be long-period oscillations of the Sun, or something resonating in the structure of the Earth… although the latter seems a bit less precise to me.

Definitely time to look at the raw data. A lot of this hypothesizing depends on just how tight the error bounds on this data really are. If they’re as tight as they seem from the graphs, this is an amazing source of data.

The good news: We’re still about 16 Myr from the next predicted peak in this cycle, so we’ve got a little while to figure it out. Before the cosmic death rays come to get us.

ETA: There’s a good post about this on the arXiv blog which gives some more context.

Published on July 12, 2010 at 11:16  Comments (4)

On the value of hobbies

This bit of news just makes me happy. Not because the scientific advance is so critical (it’s an important advance, and will have significant practical applications, but that isn’t what I like about it) but because, despite having a really stressful and high-level day job, Steve Chu is still relaxing by going into the lab and doing science. And publishing papers in Nature. It’s nice to see that you can still make time for the things you love doing the most, no matter where you end up in your career.

Published on July 10, 2010 at 17:28  Comments Off

The Blog Is Now Here!

As of now, my blog is here at www.yonatanzunger.com; the old LJ blog is now idle.

There is an LJ syndication of this blog at zunger_rss, for those who want it.

Published on July 8, 2010 at 18:09  Comments (1)

New reading material

Amy has a new blog, The Practical Free Spirit. It’s all about doing big, crazy things (having a career in the arts, trying to change the world, etc) from a very practical perspective, and I recommend it. Today’s post, for example, is on dealing with disappointment.

Published on July 8, 2010 at 12:37  Comments Off

Fallingwater, further thoughts

One interesting aspect of the structure of Fallingwater which I’m seeing much more clearly after building this model is how each level of the building has the logical structure of a “pool.” The floor layouts are open and in slightly irregular shapes, with edges and patterns which mimic the flow of water; the connections between levels, both inside the building and outside (through the various paths and balconies), indicate the natural way in which water would fall from one level to another in a natural system of pools on a cliffside. The windows serve both to open up the space and let in light, and to highlight this building/water relationship. (The model designer nicely used the same brick type for the water and the windows, which really makes this vivid.) The river at the bottom simply seems to be the lowest level of this stacking. The overall effect is that the house is like a permanent (but oddly well-sheltered) camp in the hills.

I think I need to get out to Mill Run sometime and see this in person.

Published on July 6, 2010 at 15:18  Comments Off

Organic architecture

I’m spending part of this Fourth of July holiday building a Lego model of one of the great works of American architecture, Fallingwater. Building this is a fascinating process; it’s from a plan worked out by a professional, and he did an excellent job of conveying a lot of Frank Lloyd Wright’s key ideas in the building process. For example, one begins by building up the landscape; the house grows out of the environment so seamlessly that the point at which you begin building the house proper is only clear in retrospect. Then you assemble a construction which is unambiguously “house”; but when you attach it to the already-laid foundation, the boundary again becomes confusing. The wall of the house could just as easily be a rock escarpment; the window, a waterfall.

It’s giving me a real appreciation for FLW’s work on this house. I need to walk around and look at some other houses and see how they handle the relationship of the structure to its environment; I suspect that a big part of the reason that so many suburban houses look, well, so suburban is that they have no clear relationship to it at all, and look rather like they got dropped on an otherwise empty lawn by aliens.

Published on July 5, 2010 at 15:02  Comments (6)

Economics thoughts

Technical rambly post.

So last night I started reading MWG (Mas-Colell, Whinston, and Green) on microeconomics. One of the things which struck me was their use of a rather artificial-feeling mathematical framework, with consumption being a function of prices (a vector in an L-dimensional space) and of wealth (a single real number). Various bits of math follow from the statement that consumption is homogeneous of degree zero as a function of these two sets of variables, which is just the statement that prices are only meaningful relative to overall wealth.

What’s a bit unnatural is the division of price and wealth into two separate variables, and the equations all reflect this. It seems a great deal more natural to merge these into an (L+1)-dimensional vector, with “commodity zero” being money. This is nice both mathematically (the equations are suddenly a lot more compact) and conceptually (it makes it a lot easier to think about, say, multiple kinds of money flowing around in a system). The Walras axiom then takes the form that the aggregate consumption of money over time is equal to total wealth, i.e., ultimately people spend all of their money.
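The two properties in play here (degree-zero homogeneity and the Walras axiom) can be checked on the standard Cobb-Douglas demand function; this is my own illustration, not anything from MWG:

```python
def demand(prices, wealth, shares):
    """Cobb-Douglas demand: spend a fixed share of wealth on each good."""
    return [a * wealth / p for a, p in zip(shares, prices)]

p = [2.0, 5.0, 1.0]       # prices for L = 3 goods
w = 100.0                 # wealth
a = [0.5, 0.3, 0.2]       # expenditure shares, summing to 1

x = demand(p, w, a)

# Homogeneity of degree zero: scaling all prices and wealth by the
# same factor leaves demand unchanged; only relative prices matter.
x_scaled = demand([3.0 * pi for pi in p], 3.0 * w, a)
assert all(abs(u - v) < 1e-9 for u, v in zip(x, x_scaled))

# Walras' law: total expenditure exhausts wealth.
assert abs(sum(pi * xi for pi, xi in zip(p, x)) - w) < 1e-9
```

In the merged (L+1)-vector picture, the separate wealth argument would disappear into the “commodity zero” coordinate, and the Walras axiom becomes a statement about that coordinate.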

But this led me to two questions which I think still need some pondering.

  1. In this context, the Walras axiom no longer seems so obvious, especially when you consider that there could be multiple “money-like” commodities in the system. What is special about money that causes people to ultimately spend all of it? In a utility model, I could see that money would be a utility-zero commodity, so if there’s anything with positive value to spend it on, you would probably do so. (At least, so long as all interactions are linear — but I think that you can prove that they always are.) But this non-obviousness suggests that there may be a more interesting way to phrase the axiom which ties more directly to the way that people relate to money.
  2. Once you start to treat money as Yet Another Commodity, the arbitrariness of using it as the scale for all the other variables seems significantly more obvious. Not in the moral sense, where it was pretty obvious to begin with, but simply mathematically; the choice of a preferred axis in commodity space seems almost perverse. One interesting alternative way to model things (which fits more naturally with choice models) would be to think about pairwise exchange costs rather than overall numerical costs — i.e., to think of everything as barter, with money simply a highly fungible good. What’s interesting is that this is significantly more general than numerical costs, in the same way that choice models are more general than preference models; it lets you model things such as nonfungible goods. (Money can buy time, but can’t necessarily buy loyalty; on the other hand, loyalty can buy loyalty) I suspect that there are some interesting techniques possible here — has this area been explored?
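One way to make the barter picture concrete is to represent pairwise exchange as a partial relation between goods, with money simply an unusually well-connected good. This is a toy of my own construction, not an established formalism:

```python
# rate[a][b] = units of b obtainable per unit of a; absence = no trade.
rate = {
    "money":   {"time": 0.1},                  # money can buy time...
    "time":    {"money": 8.0, "loyalty": 0.05},
    "loyalty": {"loyalty": 1.0},               # ...but only loyalty buys loyalty
}

def can_exchange(a, b):
    """True if good a can be traded for good b at some rate."""
    return b in rate.get(a, {})
```

Nonfungibility then shows up as missing edges in this relation, which is exactly the sort of structure a single numerical price axis cannot express.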
Published on June 17, 2010 at 08:25  Comments (4)

Well, crap.

It looks like the US may have actually managed to do something which will change the situation in Afghanistan in the long term, not just the short term: it has discovered large mineral deposits.

It’s going to take a while to process the potential implications of this. Afghanistan has been an isolated place, ruled by tribal warlords and resisting any lasting change from foreign invasions for the past 2,300 years, in no small part because it has had so little value to a conqueror: its positional strategic value is limited by the fact that it’s so damned difficult to hold and to cross, its known natural resources were nil, and it had little population. People would invade it as a buffer zone (Brezhnev), or to get from one place to another (Alexander, Genghis Khan, Tamerlane), or to deal with some group causing trouble (Auckland, Lytton, Bush), but nobody ever held it for a long period of time.

But now there’s an estimated $1T of resources in the ground. On the one hand, local warlords are going to want to get in on the action; but they don’t have anything like the technical or logistical capability to extract resources effectively and sell them on the market. That suggests “large foreign investment,” which would normally be a euphemism for large companies setting up shop and extracting whatever they can, leaving behind as little as possible… but in an area quite as heavily-armed as this one, the normal techniques of this won’t work. I could imagine Western companies coming in if they were backed by a heavy mercenary force, or Chinese companies coming in backed by government troops. Western forces would be backed by governmental forces too, primarily US, assuming that the US had any sense in this — because if there are that many resources in the area, on top of its location, this place suddenly got a great deal more strategic, and keeping it out of the wrong hands (such as China’s) is an important policy goal. Russia is obviously going to want in as well, and I’ll bet that they’re going to use their other resources in Central Asia (e.g., their ability to secure countries where the US needs to maintain military bases to support operations in Afghanistan) in order to ensure that they get it.

Looks like it may be time for another Great Game in the area. I do wonder exactly when people realized the extent of resources available — it may shed some interesting light on the decisions people have been making over the past several years.

Published on June 13, 2010 at 21:26  Comments (14)