Total Horse Takeover

I hear a lot of talk of ‘taking over the world’. What is it to take over the world? Have I done it if I am king of the world? Have I done it if I burn the world? Have humans or the printing press or Google or the idea of ‘currency’ done it?

Let’s start with something more tractable, and be clear on what it is to take over a horse.

A natural theory is that to take over a horse is to be the arbiter of everything about the horse—to be the one deciding the horse’s every motion.

But you probably don’t actually want to control the horse’s every motion, because the horse’s own ability to move itself is a large part of its value-add. Flaccid horse mass isn’t that helpful, not even if we throw in the horse’s physical strength to move itself according to your commands, and some sort of magical ability for you to communicate muscle-level commands to it. If you were in command of the horse’s every muscle, it would fall over. (If you directed its cellular processes too, it would die; if you controlled its atoms, you wouldn’t even have a dead horse.)

Information and computing capacity

The reason this isn’t so good is that balancing and maneuvering a thousand pounds of fast-moving horseflesh on flexible supports is probably hard for you, at least via an interface of individual muscles, at least without more practice being a horse. I think this is for two reasons:

  • Lack of information, e.g. about exactly where every part of the horse’s body is and where its hoofs are touching the ground and how hard

  • Lack of computing power to dedicate to calculating desired horse muscle motions from the above information and your desired high-level horse behavior

(Even if you have these things, you don’t obviously know how to use them to direct the horse well, but you can probably figure this out in finite time, so it doesn’t seem like a really fundamental problem.)
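
These two gaps are enough to sink muscle-level control on their own. As a toy illustration, here is a minimal stick-balancing sketch in Python standing in for the horse (all constants are arbitrary, and nothing here is horse-specific): a controller that can see the system’s state keeps the stick upright, while the same lever operated without that information lets it fall.

```python
import math

def simulate(controller, steps=200, dt=0.01):
    # Inverted pendulum: angle 0 is upright (unstable); gravity tips it over.
    angle, velocity = 0.05, 0.0          # small initial tilt, in radians
    for _ in range(steps):
        torque = controller(angle, velocity)
        velocity += (9.8 * math.sin(angle) + torque) * dt
        angle += velocity * dt
    return abs(angle)

# With information (seeing the state) plus a little computation, the stick
# stays near upright; pushing the same lever blind, it topples.
with_feedback = simulate(lambda angle, velocity: -30 * angle - 5 * velocity)
blind = simulate(lambda angle, velocity: 0.0)
print(f"with feedback: {with_feedback:.3f} rad; blind: {blind:.3f} rad")
```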

Tentative claim: holding more levers is good for you only insofar as you have the information and computing capacity to calculate which directions you should want those levers pushed.

So, you seem to be getting a lot out of the horse and various horse subcomponents making their own decisions about steering and balance and breathing and snorting and mitosis and where electrons should go. That is, you seem to be getting a lot out of not being in control of the horse. In fact, so far it seems like the more you are in control of the horse in this sense, the worse things go for you.

Is there a better concept of ‘taking over’—a horse, or the world—such that someone relatively non-omniscient might actually benefit from it? (Maybe not—maybe extreme control is just bad if you aren’t near-omniscient, which would be good to know.)

What riding a horse is like

Perhaps a good first question: is there any sort of power that won’t make things worse for you? Surely yes: training a horse to be ridden in the usual sense seems like ‘having control over’ the horse more than you would otherwise, and seems good for you. So what is this kind of control like?

Well, maybe you want the horse to go to London with you on it, so you get on it and pull the reins to direct it to London. You don’t run into the problems above, because aside from directing its walking toward London, it sticks to its normal patterns of activity pretty closely (for instance, it continues breathing and keeping its body in an upright position and doing walking motions in roughly the direction its head is pointed).

So maybe in general: you want to command the horse by giving it a high-level goal (‘take me to London’), then you want it to do the backchaining and fill in all the details (move right leg forward, hop over this log, breathe…). That’s not quite right though, because the horse has no ability to chart a path from here to London, due to its ignorance of maps and maybe of London as a concept. So you are hoping to do the first step of the backchaining—figure out the route—and then to give the horse slightly lower-level goals such as ‘turn left here’ and ‘go straight’, and for it to do the rest. Which still sounds like giving it a high-level goal, then having it fill in the instrumental subgoals and do them.
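
That division of labor is easy to sketch. Here is a toy version in Python, with the route and the subgoals hard-coded purely for illustration (no claim that real route-planning or horse cognition looks like this): the rider does the top level of backchaining, and the horse expands each directive into instrumental subgoals.

```python
def plan_route(start, destination):
    # Rider-level backchaining: the one step the horse can't do.
    return ["go straight", "turn left here", "go straight"]

def enact(directive):
    # Horse-level backchaining: fill in the instrumental subgoals.
    subgoals = {
        "go straight": ["step forward", "keep balance", "breathe"],
        "turn left here": ["shift weight", "step left", "keep balance", "breathe"],
    }
    for action in subgoals[directive]:
        print("  horse:", action)

for directive in plan_route("here", "London"):
    print("rider:", directive)
    enact(directive)
```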

But that isn’t quite right either. You probably also want to steer the details somewhat. You are moment-to-moment adjusting the horse’s motion to keep you on it, for instance. Or to avoid scaring some chickens. Or to keep to the side as another horse goes by. While not steering it entirely, at that level. You are relying on its own ability to avoid rocks and holes, and to dodge if something flies toward it, and to put some effort into keeping you on it. How does this fit into our simple model?

Perhaps you want the horse to behave as it would—rather than suddenly leaving every decision to you—but for you to be able to adjust any aspect of it, and have it again work out how to support that change with lower-level choices. You push it to the left and it finds new places to put its feet to make that work, and adjusts its breathing and heart rate to make the foot motions work. You pull it to a halt, and it changes its leg muscle tautnesses and heart rate and breathing to make that work.
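
Rendered as a toy program (invented names and numbers throughout; the point is only the shape of the adjust-then-support loop):

```python
class Horse:
    def __init__(self):
        self.speed = 4.0       # walking pace, arbitrary units; adjustable
        self.heading = 0.0     # degrees; adjustable
        self.heart_rate = 40   # derived by the horse, never set directly

    def adjust(self, aspect, value):
        # The rider's lever: set one high-level aspect of the horse...
        setattr(self, aspect, value)
        self._support()

    def _support(self):
        # ...and the horse works out the lower-level choices that make the
        # new setting work (a stand-in for footfalls, breathing, etc.).
        self.heart_rate = 40 + 10 * self.speed

horse = Horse()
horse.adjust("speed", 8.0)   # pull it up to a canter
print(horse.heart_rate)      # 120.0: the horse made your change work
```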

Levers

On this model, in practice your power is limited by what kinds of changes the horse can and will fill in new details for. If you point its head in a new direction, or ask it to sit down, it can probably recalculate its finer motions and support that. Whereas if you decide that it should have holes in its legs, it just doesn’t have an affordance for doing that. And if you do it, it will bleed a lot and run into trouble rather than changing its own bloodflow. If you decide it should move via a giant horse-sized bicycle, it probably can’t support that, even if in principle its physiology might allow it. If you hold up one of its legs so its foot is high in the air, it will ‘support’ that change by moving its leg back down again, which is perhaps not what you were going for.
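
So adjustments seem to come in at least three kinds: ones the horse will re-plan around, ones it ‘supports’ by undoing, and ones it has no affordance for at all. A minimal sketch, with hypothetical names:

```python
SUPPORTED = {"heading", "speed", "sitting"}  # horse re-plans around these
HOMEOSTATIC = {"leg_height": 0.0}            # 'supported' by being undone

def adjust(state, aspect, value):
    if aspect in SUPPORTED:
        state[aspect] = value                 # finer motions recalculated
    elif aspect in HOMEOSTATIC:
        state[aspect] = HOMEOSTATIC[aspect]   # the leg comes back down
    else:
        # No affordance at all: leg holes, horse-sized bicycles.
        raise NotImplementedError(f"no affordance for {aspect!r}")

state = {}
adjust(state, "heading", 90.0)    # supported: takes effect
adjust(state, "leg_height", 1.0)  # 'supported': reverts to 0.0
try:
    adjust(state, "bicycle_riding", 1.0)
except NotImplementedError as e:
    print(e)                      # -> no affordance for 'bicycle_riding'
```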

This suggests that taking over a thing is not zero-sum: there is not a fixed amount of control to be had by intentional agents. Perhaps you have all the control that anyone has over a horse, in the sense that if the horse ever has a choice, it will try to support your commands to it. But still it just doesn’t know how to control its own heart rate consciously or ride a giant horse-sized bicycle. Then one day it learns these skills, and can let you adjust more of its actions. You had all the control the whole time, but ‘all’ became more.

Consequences

One issue with this concept of taking over is that it isn’t clear what it means to ‘support’ a change. Each change has a number of consequences, and some of them are the point while others are undesirable side effects, such that averting them is an integral part of supporting the change. For instance, moving legs faster means using up blood oxygen and also traveling faster. If you gee up the horse, you want it to support this by replacing the missing blood oxygen, but not by jumping on a treadmill to offset the faster travel.

For the horse to get this right in general, it seems that it needs to know about your higher-level goals. In practice with horses, they are just built so that if they decide to run faster, their respiratory system supplies more oxygen and they aren’t struck by a compulsion to get on a treadmill; if that weren’t true, we would look for a different animal to ride. The fact that they always assume one kind of thing is the goal of our intervention is fine, because in practice we do basically always want legs for motion and never for using up oxygen.
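
That built-in assumption can be rendered as a toy lookup (names and signs invented for illustration): the horse carries a fixed guess about which consequence is the point, and offsets the rest.

```python
CONSEQUENCES = {
    "move legs faster": {"travel speed": +1, "blood oxygen": -1},
}
THE_POINT = {"travel speed"}  # the horse's hard-wired guess at your goal

def support(adjustment):
    for effect, delta in CONSEQUENCES[adjustment].items():
        if effect in THE_POINT:
            print(f"keep:   {effect} ({delta:+d})")
        else:
            print(f"offset: {effect} ({delta:+d}), e.g. breathe harder")

support("move legs faster")
# keep:   travel speed (+1)
# offset: blood oxygen (-1), e.g. breathe harder
```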

Maybe there is a systematic difference between desirable consequences and ones that should be offset—in the examples that I briefly think of, the desirable consequences seem more often to do with relationships with larger-scale things, and the ones that need offsetting are to do with internal things, but that isn’t always true (I might travel because I want to be healthier, but I want to be in the same relationship with those who send me mail). If the situation seems to turn inputs into outputs, then the outputs are often the point, though that is also not always true (e.g. a garbage burner seeks to get rid of garbage, not create smoke). Both of these also seem maybe contingent on our world, whereas I’m interested in a general concept.

Total takeover

I’ll set that aside, and for now define a desirable model of controlling a system as something like: the system behaves as it would, but you can adjust aspects of the system and have it support your adjustment, such that the adjustment forwards your goals.

There isn’t a clear notion of ‘all the control’, since at any point there will be things that you can’t adjust (e.g. currently the shape of the horse’s mitochondria; for a long time, the relationship between space and time in the horse system), either because you or the system don’t have a means of making the adjustment intentionally, or because the system can’t support the adjustment usefully. However, ‘all of the control that anyone has’ seems more straightforward, at least if we define who is counted in ‘anyone’. (If you can’t control the viral spread, is the virus a someone who has some of the universe’s control?)

I think whether having all of the control at a particular time gets at what I usually mean by having ‘taken over’ depends on what we expect to happen with new avenues of control that appear. If they automatically go to whoever had control, then having all of the control at one time seems like having taken over. If they get distributed more randomly (e.g. the horse learns to ride a bicycle, but keeps that power for itself, or a new agent is created with a power), so that your fraction of control deteriorates over time, that seems less like having taken over. If that is how our world is, I think I want to say that one cannot take it over.

***

This was a lot of abstract reasoning. I especially welcome correction from someone who feels they have successfully controlled a horse to a non-negligible degree.


