Anarchy 31/Anarchism and the cybernetics of self-organising systems


Anarchism and the cybernetics of self-organising systems


The intention of this article is to suggest that some of the concepts used by cyberneticians studying evolving self-organising systems may be relevant to anarchist theory, and that some of the conclusions drawn from this study tend to favour libertarian models of social organisation. Much of the specifically cybernetic material is drawn from lectures given by Gordon Pask and Stafford Beer at Salford College of Advanced Technology. They are not, of course, responsible for any conclusions drawn, except where explicitly stated.

  Firstly, what do we mean by a self-organising system? One definition is simply ‘a system in which the order increases as time passes’, that is, in which the ratio of the variety exhibited to the maximum possible variety decreases; variety being a measure of the complexity of the system as it appears to an observer, the uncertainty for the observer regarding its behaviour. A system with large variety will have a larger number of possible states than one with smaller variety. Thus such a system may start by exhibiting very varied behaviour, e.g. a large number of different responses to a given stimulus may appear equally likely, but over a period of time the behaviour becomes less erratic, more predictable—fewer and fewer distinct responses to a given stimulus are possible (or, better, have a significantly high probability).
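
  As a rough numerical illustration of this first definition (a sketch of my own, not from the article: the response probabilities are invented, and variety is measured here as Shannon entropy in bits), consider an observer tallying a system’s responses to a repeated stimulus:

    import math

    def variety(probabilities):
        """Shannon entropy in bits: the observer's uncertainty about
        which response the system will produce."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Hypothetical response distributions at three successive times.
    # Early on, four responses are equally likely; later one response
    # comes to dominate, so the behaviour is more predictable.
    early  = [0.25, 0.25, 0.25, 0.25]
    middle = [0.55, 0.25, 0.15, 0.05]
    late   = [0.90, 0.06, 0.03, 0.01]

    max_variety = math.log2(4)   # at most four possible responses

    for label, dist in [("early", early), ("middle", middle), ("late", late)]:
        v = variety(dist)
        print(f"{label:6s} variety = {v:.2f} bits, ratio to maximum = {v/max_variety:.2f}")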

  This definition is, however, in some ways restrictive. The best such a system can do is to reach some sort of optimum state and stay there. Also, if we regard the system as a control system attempting to maintain stability in a fluctuating environment, the types of disturbance with which it can deal are limited by the fixed maximum variety of the system. This point will be dealt with later. The essential thing is that unpredictable disturbances are liable to prove too much for the system.

  Such considerations suggest that it would be more fruitful to incorporate in the definition the idea that the maximum possible variety might also differ at different times. Thus Pask restricts the term to situations where the history of ‘the system’ can best be represented as a series S₀, S₁, … Sₙ, each term a system with fixed maximum variety, and each self-organising in the first sense. With this definition we are able to deal with control systems of the type found in living organisms. Indeed, with a few limited exceptions, biological and social organisation are, up to now, the only fields in which such control systems can be found. Some of the exceptions, in the shape of artificially constructed systems, despite their crude and elementary nature in comparison with living organisms, do however exhibit remarkably advanced behaviour, at least in comparison with conventional controllers.
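
  A minimal sketch of this second definition, on invented terms of my own: the history is represented as a series of stages, each with its own fixed repertoire of possible responses and hence its own fixed maximum variety, and within each stage the behaviour settles down in the first sense.

    import math

    # Each stage S_0, S_1, ... has a fixed repertoire of possible responses
    # (hence a fixed maximum variety); between stages the repertoire itself
    # changes. The numbers are purely illustrative.
    stages = [
        {"possible_responses": 4,  "settled_responses": 2},   # S_0
        {"possible_responses": 8,  "settled_responses": 3},   # S_1
        {"possible_responses": 16, "settled_responses": 4},   # S_2
    ]

    for i, s in enumerate(stages):
        max_v = math.log2(s["possible_responses"])
        end_v = math.log2(s["settled_responses"])
        print(f"S_{i}: maximum variety {max_v:.1f} bits, "
              f"variety after settling {end_v:.1f} bits")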

  For an example of self-organising behaviour in this sense, we may consider a human being learning to solve certain types of problem, as his behaviour appears to an observer. Over an interval the behaviour may appear self-organising in the first sense. When, however, the learner adopts a new concept or method, there will be a discontinuity in the development of the behaviour, after which it will again be self-organising in the first sense, for a time, but now incorporating new possibilities, and so on.

  In many discussions of control situations the concept of ‘Hierarchy’ appears very quickly. This may tend to make the anarchist recoil, but should not do so, since the usage is a technical one and does not coincide with the use of the term in anarchist criticisms of political organisation.

  Firstly, the cybernetician makes a very important distinction between two types of hierarchy, the anatomical and the functional, to use the terminology adopted by Pask. The former is the type exemplified in part by hierarchical social organisation in the normal sense (e.g. ‘tree of command’ structure in industry), that is: there are two (if there are two levels) actual distinguishable concrete entities involved. The latter refers to the case where there may be only one entity, but there are two or more levels of information structure operating in the system—as for example in some types of neuron networks. A comparable concept is Melman’s ‘disalienated decision procedure’.[1] This idea might, I think, be suggestive to anarchists.

  Secondly, even in the case of ‘anatomical hierarchy’, the term only means that parts of the system can be distinguished which deal with different levels of decision making and learning, e.g. some parts may deal directly with the environment, while other parts relate to the activity of these first parts; or some parts learn about individual occurrences, while others learn about sequences of individual occurrences, and others again about classes of sequences.
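
  As a toy sketch of such levels (my own construction, not Pask’s): one part responds to individual occurrences, while a second part learns about the sequence of occurrences and adjusts the rule the first part uses.

    # A toy two-level structure: the lower level responds to individual
    # events; the upper level watches the sequence of events and adjusts
    # the lower level's rule. Names and rules are invented.

    class LowerLevel:
        def __init__(self):
            self.threshold = 5.0        # the parameter the upper level tunes

        def respond(self, event):
            return "act" if event > self.threshold else "wait"

    class UpperLevel:
        def __init__(self, lower):
            self.lower = lower
            self.history = []

        def observe(self, event):
            self.history.append(event)
            # Learn about the sequence: track the recent average and move
            # the lower level's threshold towards it.
            if len(self.history) >= 3:
                self.lower.threshold = sum(self.history[-3:]) / 3

    lower = LowerLevel()
    upper = UpperLevel(lower)
    for event in [2, 3, 8, 9, 7, 1]:
        print(event, lower.respond(event), round(lower.threshold, 2))
        upper.observe(event)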

  Even in the anatomical sense, then, the term need have none of the connotations of coercive sanctions in a ruler-ruled relationship which are common in other usages.

  An important phenomenon in self-organising systems is interaction between the information flowing in the system and the structure of the system. In a complex system this leads to Redundancy of Potential Command: it is impossible to pick out the critical decision-making element, since this will change from one time to another, and depend on the information in the system. It will be evident that this implies that the idea of a hierarchy can have only limited application in such a system.
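
  A crude sketch of redundancy of potential command (the ‘relevance’ figures are invented): at each moment the decision is taken by whichever element happens to hold the most relevant information, so no fixed critical element can be picked out in advance.

    import random

    random.seed(0)

    # Three elements, none permanently in command: at each step the decision
    # is taken by whichever element holds the most relevant information just
    # then. "Relevance" here is simply a random number, for illustration.
    elements = ["A", "B", "C"]
    command_counts = {e: 0 for e in elements}

    for step in range(12):
        relevance = {e: random.random() for e in elements}
        decider = max(relevance, key=relevance.get)
        command_counts[decider] += 1
        print(f"step {step:2d}: decision taken by {decider}")

    print("times each element commanded:", command_counts)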

  I will now attempt to give a brief sketch of a partly artificial self-organising system, involving the interaction between human beings and a machine. This provides examples of the concepts introduced, and also, I feel, suggests important general conclusions about the characteristics of self-organising groups—characteristics which may sound familiar to libertarians. The machine in question is a group teaching machine developed by Gordon Pask.[2]

  Prior to this Pask had developed individual teaching machines which were important advances in the growth of applied cybernetics.[3] However, on considering the problem of group teaching (for skills where there exists some calculable measure of the pupils’ performance whose rate of change will serve as a suitable indication of learning), he did not simply combine individual machines.

  The important insight he had was that a group of human beings in a learning situation is itself an evolutionary system, which suggested the idea of the machine as a catalyst, modifying the communication channels in the group, and thus producing different group structures.

  In the development of the individual teaching machines, the possibility of the pupil dominating the machine had already arisen. This Pask now extended by introducing the idea of a quantity, ‘money’, allocated to each member of the group, and used by each of them to ‘buy’ for himself control over the communication structure of the group and over the partial specification of the solution provided by the machine. Now, in the individual machine, the degree to which the pupil was helped was coupled to the change in his degree of success. If he was becoming more successful then the help given was decreased. In the group machine, the allocation of ‘money’ is coupled to two conditions—increasing success and increasing variety in the group structure. This second condition is the key to the novelty of the system.
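
  The following is only a guess at the general shape of such a coupling, not Pask’s actual rule: the allocation rewards both an increase in the group’s measure of success and an increase in the variety of its communication structure.

    def money_allocation(success_change, structure_variety_change,
                         base=1.0, k_success=2.0, k_variety=2.0):
        """Toy allocation rule: more 'money' is granted when the group is
        becoming more successful and when the variety of its communication
        structure is increasing. The coefficients are illustrative only."""
        return max(0.0, base
                        + k_success * success_change
                        + k_variety * structure_variety_change)

    # A group that improves while reshuffling its structure is rewarded more
    # than one that improves by settling into fixed roles.
    print(money_allocation(0.3, 0.2))    # improving and varying: 2.0
    print(money_allocation(0.3, -0.2))   # improving but rigidifying: 1.2
    print(money_allocation(-0.1, 0.0))   # getting worse: 0.8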

  The system, then, has changing dominance and exhibits redundancy of potential command.

  In practice, each pupil sits in a little cubicle provided with buttons and indicators for communication, and a computer is used for control, calculating the various measures, etc. The operator is provided with some way of seeing what is going on, and can deliberately make things difficult for the group, by introducing false information into the channels, etc., to see how the group copes with it.

  The problems which Pask, at the time, had used in these group experiments had been formulated as conveying information about the position of a point in some space, with noise in the communication channels. The group had been asked to imagine that they were air traffic controllers, given co-ordinates specifying the position of an aircraft at a certain time, for example.

  He suggests, however, that the problem of agreeing on a choice of policy on a basis of agreed facts is not, in principle, very different from the case in which ‘the facts’ are in dispute, and there is no question of adopting any future policy—except of course the policy to adopt in order to ascertain the true facts and communicate them; this being the problem which the group solves for itself. It is in this sense that the group may be regarded as a decision maker.

  It will be noted that the state of the system when in equilibrium is the solution to the problem. Also that this solution changes with time. This is also the case in the first example from purely human organisation which occurred to me—a jazz band (an example also suggested by Pask).

  Pask emphasised that he had not then had the opportunity to obtain sufficient data to make any far-reaching, well-substantiated generalisations from these experiments. The results he had obtained, however, were very interesting and, I think, give considerable insight into the characteristics of self-organising systems, and their advantages over other types of decision-makers.

  Some groups, after an initial stage while they were gaining familiarity with the machine, began assigning specific roles to their members and introducing standard procedures. This led to a drop in efficiency and an inability to handle new factors introduced by spurious information, etc. The learning curve rises, flattens, then drops sharply whenever some new element is introduced. The system is now no longer self-organising.
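
  A caricature of this pattern, with entirely invented numbers, intended only to show the shape of the curve for a group that has settled into fixed roles:

    # Invented numbers tracing the curve for a group with fixed roles and
    # stereotyped procedures: performance rises and flattens, then drops
    # sharply each time a new element (e.g. spurious information) appears.
    performance = []
    score = 0.2
    for trial in range(30):
        if trial in (15, 24):               # a new element is introduced here
            score *= 0.4                    # the rigid group copes badly
        else:
            score += (0.9 - score) * 0.2    # otherwise: rise, then flatten
        performance.append(round(score, 2))

    print(performance)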

  Necessary characteristics for a group to constitute a self-organising system, Pask suggests, are avoidance of fixed role-assignments and stereotyped procedures. This is of course tied up with redundancy of potential command.

  I think we might sum up ‘fixed role assignment and stereotyped procedures’ in one word—institutionalisation.

  Note that these characteristics are necessary, not sufficient: at the very least the group must first of all constitute a system in a meaningful sense; there must be communication between the members, a sufficient structure of information channels and feedback loops.

  The role of the computer in Pask’s system may be worrying some. Is this not an analogue of an authoritarian ‘guiding hand’? The answer is, I think, no. It must be remembered that this is an artificial exercise the group is performing. A problem is set by the operator. There is therefore no real situation in actuality for the group to affect and observe the result of their efforts. It is this function of determining and feeding back success/failure information which the machine fulfils.

  The other important aspect of the machine as a catalyst in the learning process, we have already mentioned. There is a rough analogy here with the role of ‘influence leader’ in the Hausers’ sense,[4] rather than any authoritarian ‘overseer’. I will return to this question of the role of the machine shortly.

  Regarding the group as a decision maker, Pask suggests that this is perhaps the only sense in which ‘two heads are better than one’ is true—if the ‘two heads’ constitute a self-organising system. The clue as to why a number of heads, e.g., notoriously, in committees, often turn out to be much worse than one, is, he suggests, this business of role assignment and stereotyped procedure. He has not, however, suggested why this should arise.

  Drawing on knowledge of behaviour of a self-organising nature exhibited in other groups, e.g. informal shop-floor organisation, the adaptability and efficiency exhibited in instances of collective contract working, and similar phenomena,[5] we may perhaps offer some suggestions as to how institutionalisation may arise in certain types of circumstances.

  Imagine a workshop of reasonable size, in which a number of connected processes are going on, and where there is some variation in the factors affecting the work to be taken into account. There is considerable evidence that the workers in such a shop, working as a co-operating group, are able to organise themselves without outside interference, in such a way as to cope efficiently with the job, and show remarkable facility in coping with unforeseeable difficulties and disruptions of normal procedure.

  There are two levels of task here:
  1. The complex of actual production tasks.
  2. The task of solving the problem of how the group should be organised to perform these first level tasks, and how information about them should be dealt with by the group.

  In situations of the kind I am imagining, the organisation of the group is largely determined by the needs of the job, which are fairly obvious to all concerned. There is continual feed-back of information from the job to the group. Any unusual occurrence will force itself on their notice and will be dealt with according to their resources at that time.

  Purely for the purpose of illustration, let us now consider the situation of the same type of shop, only this time assuming that it is organised by a committee from outside the shop. The situation in which the committee finds itself is completely different from that of the work group. There are now three levels of problem:
  1. The problems solved by the individual workers, i.e. their jobs.
  2. The problem of the organisation of the work group.
  3. The problem of the organisation of the committee itself.

  The determining success/failure information for all these has still to come from (or at least is supposed to come from) the net result of the solution of the first level problems, i.e. the state of production in the shop.

  The committee is denied the continuous feed-back which the group had. While working on its solution to the second level problem, it will have no information about the success of its alternatives, only previous findings, coded, in practice, in an inadequate way. The degree of success will only be observable after a trial period following their decision on a solution. (Also, unusual circumstances can only be dealt with as types of occurrence, since they cannot enumerate all possibilities. This is important in determining the relative efficiency of the two methods of organisation, but is of less importance in our immediate problem.)
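
  To make the contrast concrete, here is a minimal sketch (the control task and figures are my own invention) of the difference between correcting continuously from the job and correcting only after a trial period:

    import random

    # A toy control task: keep a "plan" close to a drifting target. The work
    # group revises after every observation; the committee can only revise
    # its plan after a trial period of several steps has elapsed.
    def run(correction_interval, steps=30, seed=1):
        random.seed(seed)                # same disturbances for every run
        target, plan, total_error = 10.0, 10.0, 0.0
        for t in range(steps):
            target += random.uniform(-1.0, 1.0)   # unforeseen fluctuations in the job
            total_error += abs(target - plan)
            if (t + 1) % correction_interval == 0:
                plan = target                     # bring the plan up to date
        return total_error

    print("continuous feedback (revise every step):", round(run(1), 1))
    print("delayed feedback (revise every 10 steps):", round(run(10), 1))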

  It follows that the committee cannot solve the third problem by a method analogous to that used by the original work group in solving the second level problem; while working on the second level problem the committee has no comparable information available to determine the solution of the third level problem. But they must adopt some procedure, some organisation at a given time. How then is it to be determined?
  1. See Seymour Melman: Decision-Making and Productivity (Blackwell, 1958).
  2. Gordon Pask: “Interaction between a Group of Subjects and an Adaptive Automaton to produce a Self-Organising System for Decision-Making” in the symposium Self-Organising Systems, 1962, ed. Yovits, Jacobi and Goldstein (Spartan Books).
  3. See Stafford Beer: Cybernetics and Management (English Universities Press, 1959) pp. 123-127, and Gordon Pask: An Approach to Cybernetics (Hutchinson, 1961).
  4. See Richard and Hephzibah Hauser: The Fraternal Society (Bodley Head, 1962).
  5. See, for example, the paper by Trist on collective contract working in the Durham coalfield quoted by H. Clegg in A New Approach to Industrial Democracy (Blackwell, 1960) and the discussion of this book by Geoffrey Ostergaard in ANARCHY 2. Note the appearance of new elements of job rotation.
      Despite his emphasis on the formal aspects of worker organisation, Melman’s analysis (see Note 1) of the worker decision process at Standard’s brings out many of the characteristics of a self-organising system: the evolving nature of the process; the difficulty of determining where a particular decision was made; changing dominance; the way in which the cumulative experience of the group changes the frame of reference against which subsequent problems are set for solution. A better idea of the gang system from which this derives can, however, be obtained from Reg Wright’s articles in ANARCHY 2 & 8.