The Future of Work Part 3: An Example of Artificial Intelligence

We don’t have a clear definition of artificial intelligence, but we have some examples. One is machine translation, the subject of a recent article in the New York Times Magazine, The Great A.I. Awakening by Gideon Lewis-Kraus (the “AI Article”). It’s a beautiful piece of science writing. The author had the opportunity to see how employees of Google developed a neural network machine translation system and implemented it. It’s long, but I highly recommend it. Rather than try to summarize it, I will draw out a few points.

The idea of neural net systems was inspired by our current understanding of the way the human brain works. There are about 100 billion neurons in the average brain at birth. As we age, connections among neurons increase, so that each can be connected to as many as 10,000 other neurons. Thus, there are trillions of possible connections. Many of these are pruned as we age because they are not used. Many of the remaining connections are used to maintain the body, and to manage specific human processes, like the endocrine system, or to monitor for balance and pain.

One way to think about AI processes is to see them as pattern-matching systems. Until recently we didn’t have the processing power to handle even a tiny fraction of the brain’s connections, so the early efforts at simulating the brain were bound to fail. On the other hand, computers have long been used to match patterns in relatively small sets of data. Here’s a technical example. One of the main lessons of the AI Article is that it takes massive amounts of processing power and massive amounts of data to begin to approach the connective power of the brain. Google also needed new mathematical theories to make it possible.
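To make the pattern-matching idea concrete, here is a toy sketch of my own (not anything from the AI Article or Google’s system): a single artificial “neuron” that learns to match the logical OR pattern by adjusting its connection weights. Scaled up to billions of weights, this same adjust-the-connections mechanism is what neural networks run on.

```python
# A single perceptron "neuron" learning the OR pattern.
# This is an illustrative toy, not a real translation system.

def step(x):
    # Fire (1) if the weighted inputs cross the threshold, else stay quiet (0).
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input "connection"
    b = 0.0          # bias term
    for _ in range(epochs):
        for inputs, target in samples:
            out = step(w[0] * inputs[0] + w[1] * inputs[1] + b)
            err = target - out
            # Strengthen or weaken each connection in proportion to its input.
            w[0] += lr * err * inputs[0]
            w[1] += lr * err * inputs[1]
            b += lr * err
    return w, b

# The pattern to match: logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * i[0] + w[1] * i[1] + b) for i, _ in data]
print(predictions)  # -> [0, 1, 1, 1]
```

The interesting part is that nobody writes a rule for OR; the weights drift into place from examples, which is the whole trick, writ very small.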

The astonishing thing is that the number of people needed to create those theories and do a preliminary setup is so small: maybe 10 all told. The full implementation required a team of 100 or so. More people were needed to create a new chip and get it working, and to install the new processors into the Google system, but again, the number seems to be in the hundreds, and it isn’t clear that there were that many new jobs.

The task was made easier by the fact that Google had a huge library of documents translated between languages. These served as training materials for the translation project. Google also has a huge library of images, YouTube videos, and other materials suitable for training. There won’t be many jobs created in this area either.

These are two of the categories of new jobs identified by the White House in the report discussed here. There don’t seem to be many new openings in new fields, but who knows. And there is nothing here likely to create jobs for anyone but the most educated people, though, of course, there may be jobs created in related fields.

The new platform created by these small teams can be adapted for many different problems. Doubtless as those ramp up there will be some new jobs, but it seems unlikely that there will be a hiring burst. Instead, we will see a war of dollars as the big tech companies compete world-wide for the top talent. The AI Article says that the Google team includes people from around the world. We get one or two of the personal stories, and they are amazing.

The AI Article gives a good introduction to the way neural networks work. I caution readers that these parts are metaphorical, and it is unlikely to be useful to try to reason with those metaphors, either to extend them or to make predictions about the future. The metaphor is not the thing. It is merely an aid to understanding the thing for those with little or no background. I link to a couple of pieces here that can be used to gain a deeper understanding of neural networks and deep learning.

At the end of the AI Article, there is a discussion of two possible ways of understanding consciousness. One view sees consciousness as something special beyond the mere physical actions of the brain. It finds its origins in the mind-body dualism of Descartes, and is disparagingly referred to as the Ghost in the Machine. Religious people might see it as the soul, or the Atman; but I’m not sure that’s right. The other view dissolves this problem, and sees consciousness as an emergent phenomenon that arises from the complexity of the connectivity in the brain. The AI Article doesn’t go into this area in much detail.

And yet the rise of machine learning makes it more difficult for us to carve out a special place for ourselves. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view.

For those interested in pursuing this matter, see Consciousness Explained, by Daniel Dennett. The linked Wikipedia article gives a brief description of the book along with Searle’s objections to it.

I don’t know enough to have an opinion about any of this, but I hope other people are thinking about one aspect of this problem. In Western Liberalism, it is a given that there is something special about human beings, and about each of us individually. I don’t know how much of that arises from Christianity, with its emphasis on the relation between, and the likeness of, each individual to the Creator. There is bound to be something unnerving in the combination of a) the idea that our individual selves are just complications of our individual brains, and b) our increasing ability to model that complication in our electronic gear. I don’t have any immediate apocalyptic idea about this, not least for the reasons in this presentation. But every new idea about human beings has been twisted by despots and demagogues for their own purposes. It’s dangerous to pretend that isn’t going to happen with these ideas.

17 replies
  1. atcooper says:

    This will be a dead-end ultimately. At the root, it’s trying to simulate humanity using nothing but logic. If we can’t prove mathematics from logical foundations, or more specifically, prove it given finite time, there’s no way one can transform discrete logic to fuzzy logic to intuition. It may be possible under a whole other paradigm of computing, but not with what we have.

    Another clue, compare the power required for the brain as opposed to the power of the processor. Moore’s law is essentially over – we’ve only been able to extend it a bit with the move to power efficiency. This path will quit bearing fruit eventually. After that, it’s chasing the perfect superconductor again.

    Take the Hemingway example at the start of that NYT article:

    Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

    NO. 2:

    Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

    First, for folks who haven’t read the article, guess which is which, human or AI. Anyone familiar with Hemingway should be able to do this. I’m not gonna say which it is, but the following paragraph will give some clues.

    The giveaways are in the construction of the sentences. Hemingway’s core, the subject and verb placement, and how subordinate clauses are dressed around the core, will tell you which is which. The AI is much sloppier, and throws in unnecessary modifiers, whereas Hemingway wastes no verbiage. His clauses, in particular, are much more evocative.

    In its way, the current group of AI folks is trying to manifest the wisdom of the crowds. All this will do is provide the bare minimum, another race to the bottom.

    • boffotheclown says:

      “At the root, it’s trying to simulate humanity using nothing but logic.”

      The problem with this statement is that the point of AI becomes simulating humanity only, although I admit that language translation is mostly an anthropocentric concept.  I think that the real advances in synthetic intelligence will be truly alien.

      • atcooper says:

        I hear you. The next, unstated step, is to point out the map is not the territory. A simulated human is not human. What jobs are these refugees supposed to do with their new AI translators?

        I was a bit shocked when they finally got Go working, so I did some reading, and lo and behold, the computer was trained on real human games, however many it took. To emphasize my point, an AI, as we are being sold, will never write a new Hemingway novel. There’s not enough of them to build the model required to do so.

        This is a closed system with no capability of learning in any fair sense of the term. All it does is hoover up past data to project into the future. When put like this, I think the problem seems more obvious. Chomsky is right, and it really does look like the end game is to give the well off the means to fire their servants. It’s like a giant case of engineer’s disease.

        The term AI is disingenuous. And no one says in the marketing material, this is weak AI. It reminds me of when I heard about the new 3D movies. Sardonically, I said, oh, we have holodecks now?

        So to conclude, this will work, but for it to work well, it will need curation, enough so that it mitigates the economic value it’s supposedly going to bring to the table.

    • Hermes says:

      ‘The summit of the west’ — sorry to spell it out but that is not the same thing as ‘its western summit’. The translation is a failure right there.

      Very revealing post though.

      • atcooper says:

        The first predicate of the first sentence is a good tell as well. Note the placement of snow and the numerical value of height. The AI, funny enough, gives the number more emphasis.

        It’s also telling that the test case is full of ‘is’ sentences. That would be the English equivalent of equals. Hemingway at his best will be even harder for the bots to create counterfeits for.

        These bots will have a much harder time with Latinate languages where the subject and verb are the same word.

  2. godfree Roberts says:

    Western Liberalism be damned!
    The Way that can be told of is not an unvarying way;
    The names that can be named are not unvarying names.
    It was from the Nameless that Heaven and Earth sprang;
    The named is but the mother that rears the ten thousand creatures, each after its kind.
    – Tao Te Ching

    Get real.

  3. scribe says:

    Speaking as someone who makes my living doing foreign-language legal work, I can tell you a couple things about Google Translate.

    1.  it is useful for people who want a quick and dirty translation of a news article.  You’ll get most of the meaning.  But it will sometimes be diametrically wrong, meaningwise.  It doesn’t get nuance, sarcasm, or any of the other things which flavor and color language.

    2.  It is forbidden for us to use when doing legal work.  This, because Google stores everything.  If one were to use Google Translate on discovery documents, one would be violating attorney-client privilege.


    More to the point, on meaning, it still requires real humans to process it.  I recall a project I worked on where one of my colleagues couldn’t help but crack up at the content of the emails she was reading.  This, because as a native speaker from the locale whence the writers hailed, she understood not only the meaning, but also the local slang – boontling – that was contained in the emails and got the full flavor of it.  Which was why she was on that part of the project….

    Google will probably work on this for a few decades and get it, but you could do just as well paying native speakers $100 an hour.

    • Ed Walker says:

      I agree that we are a long way from replacing Richard Pevear and Larissa Volokhonsky as translators of great Russian literature, and that there isn’t any particular reason to waste time and money trying to match them. In the AI Article the team head says that the new system is much better than the old one, and I think that’s right, based on my own use to translate French news stories. It is designed to be a work tool, not a literature drafter. It would be fascinating to compare a machine translation of L’Etranger from French to English with the latest translations. Maybe even a bit of Proust would make a good comparison.

      The crucial point is that with this system in place, adding more processors and more training may make a big difference. Or, we might have to wait for new theories. Either way, this translation is going to improve. The same is true as the platform is adapted to other purposes, which is sort of like the way old systems in nature evolve to serve new goals.


  4. martin says:

    note to self.. file this post under

    1.  Things to remind myself why my soon coming demise is a good thing.

    2.  Why I despise AI programmers .

    3.  Why AI will never understand a blowjob.

    4. The main difference between AI and humans.

    5.  The day AI discovers it will never be loved.

    6.  The day AI discovers it defines the word hideous.

    7.  The day AI discovers why it doesn’t feel emotions.

    8.  The day AI programmers discover the law of unintended consequences.


    Meanwhile, at least the management set isn’t immune to immediate replacement.


    • Ed Walker says:

      I’ll get to that last point next. In the meantime, if you like science fiction, there’s a good book by Charlie Stross, Accelerando, which is part of a trilogy, that explores some of the ideas of exploding AI.

  5. Hermes says:

    I think we are much further away than what is alluded to here. When thinking of the seemingly endless nuances of the use of a language, I am not convinced that arriving at the perfect translating AI would take anything less than a complete simulation of consciousness (or the thing itself). And since we cannot even get a handle on human consciousness (for example, see here:

    and its followup here:)

    I don’t see how a computer could ever be programmed to simulate it (or have it).

    • Ed Walker says:

      Thanks for the interesting links. I read all three, and will look for the next one.

      I’ll be interested to see if they have some conclusion, or just more discussion.

  6. Michael says:

    I would be *very* interested in seeing the results from repeatedly translating a text back and forth between two languages, e.g. German -> English; the translated English back to German; the German->English->German back to English; and so on. The more iterations accomplished before the text goes to s**t, the better the algo, and vice versa. Of course, the algo(s) should be fed every imaginable type of text – poetry; rocket design; psychology; quantum mechanics; press reports; BLOGS… ad nauseam.

    I’ll wait.

  7. Ron says:

    I was thinking of something similar. Here’s google-translate-telephone starting and ending with English.

    english: the prostitutes peed on the bed in front of the vengeful little man
    french: Les prostituées ont pissé sur le lit devant le vengeur petit homme
    russian: Проститутки мочился на кровати перед мстительным маленького человека
    spanish: Las prostitutas orinó en la cama delante de un pequeño hombre vengativo
    slovak: Prostitútky močil v posteli pred malým pomstychtivá muža
    vietnamese: Gái mại dâm đi tiểu trên giường trước khi người đàn ông nhỏ thù
    yiddish: פּראַסטאַטוץ פּישן אין בעט איידער די ביסל מענטש פייַנט
    swahili: Makahaba kukojoa kitandani kabla kidogo adui mtu
    english: Prostitutes little bed-wetting before one enemy
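The back-and-forth experiment Michael proposes and Ron demonstrates is easy to script. Here is a minimal harness, sketched by the editor; the `fake_translate` function is a made-up stand-in, not a real translation API, so in practice you would plug in an actual machine-translation client.

```python
# Round-trip translation harness. The translator here is a placeholder stub
# so the sketch runs on its own; a real test would use a translation service.

def round_trip(text, translate, src="en", dst="de", iterations=3):
    """Bounce `text` between two languages `iterations` times and
    return the sequence of source-language versions produced."""
    versions = [text]
    for _ in range(iterations):
        foreign = translate(versions[-1], src, dst)
        versions.append(translate(foreign, dst, src))
    return versions

def similarity(a, b):
    """Crude degradation score: fraction of shared words (Jaccard)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def fake_translate(text, src, dst):
    # Identity "translation" -- a perfect (and unrealistic) translator,
    # used only so the harness is runnable as-is.
    return text

versions = round_trip("No one has explained what the leopard was seeking.",
                      fake_translate)
print(similarity(versions[0], versions[-1]))  # -> 1.0 for the identity stub
```

With a real translator the similarity score would fall with each iteration, and how fast it falls is exactly Michael’s proposed measure of the algorithm’s quality.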

