Posts

Breathing Room: Thanksgiving Day Emergency Cooking Aid

[NB: check the byline, thanks. /~Rayne]

Someone in my social media feed suggested in a few years it would be obvious artificial intelligence was to the internet what microplastics are to the environment.

Another pundit cracked wise yesterday about AI, wondering how many cooking disasters would happen today because folks relied on answers they found on the internet.

The amount of crap out there on the internet generated by AI is already dangerous. It’s not helped by the business models search engines and browsers use to elevate content.

The biggest challenges on Thanksgiving are generally about cooking a turkey. The Butterball brand has answered related questions for years now and is a trustworthy source because their brand is at stake – they’re committed to your positive turkey cooking experience.

But here’s the nature of the problem: if you should search Google for “Butterball turkey how to” the top result is goddamned dead bird site X.

You can’t blindly trust cooking information off X right now; there’s simply too much false information and spoofed accounts. This scenario should also tell you something about X: they spent a huge wad of cash to be the first search result instead of spending money on moderation to fight back the proliferation of crap on X.

Go directly to Butterball.com instead, double checking the URL to make sure you didn’t enter a typo.

The same goes for instructions on any brand name product – don’t search for them without first going directly to the brand’s site.

The next best alternative for information is going directly to food and cooking sites you’re already familiar with and trust – like

foodnetwork.com

bonappetit.com

allrecipes.com

whatscookingamerica.net

seriouseats.com

thekitchn.com

marthastewart.com

saveur.com

These links are not endorsements, merely shared for ease of access and AI avoidance. One personal exception is whatscookingamerica.net — this has been my favorite site for cooking prime rib and standing rib roast. Never had a bad meal relying on their information.

~ ~ ~

If you’re one of those folks who need help today, feel free to ask in comments here if you don’t trust the results you’ve received online. The community here is pretty good at finding factual material.

I’ll offer something tried-and-true which I’ve made in volume to keep on my shelf, ever since the year I forgot to buy pumpkin pie spice mix but needed it at the last minute.

Pumpkin Pie Spice Mix

Per pie:

1 teaspoon ground cinnamon
1/4 teaspoon ground nutmeg
1/4 teaspoon ground ginger
1/8 teaspoon ground cloves

Sometimes I add more ginger and cloves for a spicier-tasting pie.

Don’t have pumpkin pie spice mix, but also don’t have all those ingredients? Do what you can with what you have on hand, be creative. Add more vanilla to the pumpkin batter, maybe even add some finely grated lemon rind to replace the ginger.

And if your guests don’t care for the result, blame AI search results.

Then go to penzeys.com and order their pumpkin pie spice mix to restock your shelf. That’s another tried-and-true resource.

One more warning: there has been a proliferation of phishing in search engines using the same technique the dead bird site applied, only these scammers buy “sponsored” slots at the top of results pages.

As a rule I never, ever click on “sponsored” links anymore. Too many have been bought by crooks who spoof a similar-looking domain name, use nearly identical content matching real brands, and then attack folks who click on their fraudulent link.

~ ~ ~

Other emergencies which may occur on Thanksgiving:

– If you are going to attend a family event at which there are persons who make you uncomfortable, plan ahead for an emergency exit. Arrange for a trusted friend to place an “emergency” call or text at a specific time (or times) so that you can gracefully leave before things get worse.

– Fires are the most common accident on Thanksgiving. Make sure flammable non-food items are kept clear of the stove or grill. Have a box of baking SODA (never baking powder) or salt for small grease fires, sprinkling it directly on the flames and not from the side. A pot lid may also work to smother small fires. Keep an appropriate fire extinguisher on hand, though once it’s used the cooking area will be contaminated. Better still to prevent fires, including by not overloading electrical outlets and power strips. Can’t hurt to read this overview before getting too deep in the kitchen: https://lifehacker.com/how-to-put-out-every-kind-of-kitchen-fire-1849732334

– Burns and cuts are the most common kitchen injuries on Thanksgiving Day. Make sure you have your first aid kit at hand. Go to this link for more information about treating injuries: https://www.webmd.com/first-aid/kitchen-first-aid

– Know where to take the injured if a person needs more help than you can offer. What’s the closest emergency room or urgent care facility? Have you checked the route from your house for recent road construction? I mention this because a couple friends have had to drive to the ER during the last two weeks, one in the middle of the night. Thank goodness there wasn’t any impediment on the road.

– Learn unexpectedly you’ve been around someone who has COVID? Get away from them and leave closed spaces. Viral load matters: the more virus particles, the sicker you are likely to become even with a vaccine or booster, though vaccination makes it far more likely you’ll have a mild to asymptomatic case. Gargle with salt water and use a saline nasal spray immediately after an unexpected exposure to reduce the amount of virus particles in your throat; saline lavage regimens have reduced illness due to COVID (see https://acaai.org/news/new-study-gargling-with-salt-water-may-help-prevent-covid-hospitalization/). And for gods’ sake wear an N95 mask in public shared spaces with persons whose health status you don’t know, because the pandemic isn’t over no matter how much corporations want you to believe otherwise.

~ ~ ~

Got any tips you want to share for last-minute problems on Turkey Day? Share in this open thread.

Facebook, Hot Seat, Day Two — House Energy & Commerce Committee Hearing

This is a dedicated post to capture your comments about Facebook CEO Mark Zuckerberg’s testimony before the House Energy & Commerce Committee today.

After these two hearings my head is swimming with Facebook content, so much so that I had a nightmare about it overnight. Today’s hearing combined with the plethora of reporting across the internet is only making things more difficult for me to pull together a coherent narrative.

Instead, I’m going to dump some things here as food for further consideration and maybe a possible future post. I’ll update periodically throughout the day. Do share your own feedback in comments.

Artificial Intelligence (AI) — every time Mark Zuckerberg brings up AI, he does so about a task he does not want to employ humans to do. Zuckerberg doesn’t want to hire humans even if it means doing the right thing. There are so many indirect references to creating automated tools as substitutions for labor that it’s obvious Facebook is in part what it is today because it would rather make profits than hire humans until it is forced to do otherwise.

Users’ control of their data — this is bullshit whenever he says it. If any other entity can collect or copy or see users’ data without explicit and granular authorization, users do not have control of their data. Why simple controls like granular read/not-read settings on users’ data, operated by users, have yet to be developed and implemented is beyond me; it’s not as if Facebook doesn’t have the money and clout to make this happen.

Zuckerberg is also evasive about following Facebook users and nonusers across the internet — does browsing non-Facebook website content with an embedded Facebook link allow tracking of persons who visit that website? It’s not clear from Zuckerberg’s statements.

Audio tracking — It’s a good thing that Congress has brought up the issue of “coincident” content appearing after users discuss topics within audible range of a mobile device. Rep. Larry Bucshon (R-Indiana) in particular offered pointed examples; we should remain skeptical of any explanation received so far because there are too many anecdotes of audio tracking in spite of Zuckerberg’s denials.

Opioid and other illegal ads — Zuckerberg insists that if users flag them, ads will be reviewed and then taken down. Congress is annoyed the ads still exist. But at the heart of this exchange is Facebook’s reliance on users to perform labor Facebook refuses to hire for in order to achieve the expected removal of ads. Meanwhile, Congress refuses to do its own job to increase regulations on opioids, choosing instead to flog Facebook because it’s easier than going after donors like Big Pharma.

Verification of ad buyers — Ad buyers’ legitimacy based on verification of identity and physical location will be implemented for this midterm election cycle, Zuckerberg told Congress. Good luck with that when Facebook has yet to hire enough people to take down opioid ads or remove false accounts of public officials or celebrities.

First Amendment protections for content — Congressional GOP is beating on Facebook for what it perceives as consistent suppression of conservative content. This is a disinfo/misinfo operation happening right under our noses, and Facebook will cave just like it did in 2016 while news media look the other way since the material in question isn’t theirs. Facebook, however, has frequently suppressed neutral to liberal content — like content about and images featuring women breastfeeding their infants — and Congress isn’t uttering a peep about this. Congress also isn’t asking any questions about Facebook’s assessments of content.

Connecting the world — Zuckerberg’s personal desire to connect humans is supreme over the nature and intent of the connections. The ability to connect militant racists, for example, takes supremacy (literally) over protecting minority group members from persecution. And Congress doesn’t appear willing to see this as problematic unless it violates existing laws like the Fair Housing Act.

More to come as I think of it. Comment away.

UPDATE — 2:45 PM EDT — I’m gritting my teeth so hard as I listen to this hearing that I’ve given myself a headache.

Terrorist content — Rep. Susan Brooks (R-Indiana) asked about Facebook’s handling of ISIS content, to which Zuckerberg said a team of 200 employees focus on counterintelligence to remove ISIS and other terrorist content, capturing 99% of materials before they can be seen by the public. Brooks further asked what Facebook is doing about stopping recruitment.

What. The. Fuck? We’re expecting a publicly-held corporation to do counterintelligence work INCLUDING halting recruitment?

Hate speech — Zuckerberg used the word “nuanced” to describe the definition while under pressure from both left and right. Oh, right, uh-huh, there’s never been a court case in which hate speech has been defined…*head desk*

Whataboutism — Again, from Michigan GOPer Tim Walberg, pointing to the 2012 Obama campaign…every time the 2012 campaign comes up, you know you are listening to a member of Congress who 1) doesn’t understand how Facebook is used and 2) is working on furthering the disinfo/misinfo campaign to ensure the public thinks Facebook is biased against the GOP.

It doesn’t help that Facebook’s AI has failed on screening GOP content; why candidates aren’t contacting a human-staffed department directly is beyond me. Or why AI doesn’t interact directly with campaign/candidate users at the point of data entry to let them know what content is problematic so it can be tweaked immediately.

Again, implication of discrimination against conservatives and Christians on Facebook — Thanks, Rep. Jeff Duncan, waving your copy of the Constitution insisting the First Amendment is applied equally and fairly. EXCEPT you’ve missed the part where it says CONGRESS SHALL MAKE NO LAW respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press…

The lack of complaints by Democratic and Independent representatives about suppression of content should NOT be taken to mean it hasn’t happened. That Facebook allowed identified GOP-voting employees to work with Brad Parscale means that suppression happens in subtle ways. There’s also a different understanding between right and left wings about Congress’ limitation under the First Amendment AND Democrats/Independents aren’t trying to use these hearings as agitprop.

Internet service — CONGRESS NEEDS TO STOP ASKING FACEBOOK TO HELP FILL IN THE GAPS BETWEEN NETWORKS AND INTERNET SERVICE PROVIDERS THEY HAVE FAILED TO REGULATE TO ENSURE BROADBAND EVERYWHERE. Jesus Christ this bugs the shit out of me. Just stop asking a corporation to do your goddamned jobs; telcos have near monopoly ensured by Congress and aren’t acting in the best interest of the public but their shareholders. Facebook will do the same thing — serve shareholders but not the public interest. REGULATE THE GAP, SLACKERS.

3:00 PM — thank heavens this beating is over.

Three more thoughts:

1) Facial recognition technology — non-users should NEVER become subjected to this technology, EVER. Facebook users should have extremely simple and clear opt-in/opt-out on facial technology.

2) Medical technology — absolutely not ever in social media. No. If a company is not in the business of providing health care, they have no business collecting health care data. Period.

3) Application approval — Ask Apple how to do it. They do it, app by app. Facebook is what happens when apps aren’t approved first.

UPDATE — 9:00 PM EDT — Based on a question below from commenter Mary McCurnin about HIPAA, I am copying my reply here to flesh out my concerns about Facebook and medical data collection and sharing:

HIPAA regulates health data sharing between “covered entities,” meaning health care clearinghouses, employer-sponsored health plans, health insurers, and medical service providers. Facebook had secretly assigned a doctor to promote a proposal to some specific covered entities to work on a test or beta; the program has now been suspended. The fact this project was secret and intended to operate under a signed agreement, rather than attempting to set up a walled-off Facebook subsidiary to work within the existing law, tells me that Facebook didn’t have any intention of operating within HIPAA. The hashing concept proposed for early work, while still relying on actual user data, is absurdly arrogant in its brush-off of HIPAA.

Just as disturbing: virtually nothing in the way of questions from Congress about this once-secret program. The premise, which is little more than a normalized form of surveillance using users’ health as a criterion, is absolutely unacceptable.

I don’t believe ANY social media platform should be in the health care data business. The breach of the U.S. Office of Personnel Management should have given Congress enough to ponder about the intelligence risks from employment records exposed to foreign entities; imagine the risks if health care data had been included with OPM employment information. Now imagine that at scale across the U.S.: how many people would be vulnerable in so many ways if their health care information became exposed along with their social records.

Don’t even start with how great it would be to dispatch health care to people in need; we can’t muster the political will to pay for health care for everybody. Why provide monitoring at scale through social media when covered entities can do it for their subscriber base separately, and apparently with fewer data breaches?

You want a place to start regulating social media platforms? Start there: no health care data to mingle with social media data. Absolutely not, hell to the no.

[Photo: Paul Rysz via Unsplash]

Three Things: Eclipsed, Killer Robots, Back to the Salt Mines [UPDATED]

I’ve been trying to write all morning but I’ve been interrupted so many times by people looking for information about eclipse viewing I’m just going to post this in progress.

Mostly because I’m also helping my kid rig an eclipse viewer — lots of tape, binder clips and baling wire.

~ 3 ~

As you’ve no doubt heard, much of the U.S. will experience a solar eclipse over the next three hours. It’s already begun on the west coast, just passing totality right now in Oregon; the eclipse started within the last 25 minutes in Michigan. And as you’ve also heard, it is NOT safe to look directly at the sun with the naked eye or sunglasses. A pinhole viewer is quick and safe to make for viewing. See NASA’s instructions here and more eclipse safe viewing info here.

You can also watch NASA’s live stream coverage on Twitch TV.

We are also experiencing one of NASA’s most important services: public education about our planet and science as a whole, of particular value to K-12 educators. We can’t afford to defund this valuable service.

At this point you may imagine me on my deck holding a Rube Goldberg contraption designed to view the early partial eclipse we’ll see in Michigan — only 77% or so coverage.

~ 2 ~

KILLER ROBOTS: There’s been a fair amount of coverage this week touting Elon Musk’s call to ban ‘killer robots’. Except it’s not just Elon Musk, it’s a consortium of more than 100 technology experts which published an open letter asking the United Nations to restrain the development of ‘Lethal Autonomous Weapon Systems’ (LAWS).

I’ve pooh-poohed the development of new military technology before, mostly because DARPA doesn’t seem to be as fast at it as non-military researchers. Exoskeletons are the best example I can think of. But whether DARPA, the military, military contractors, or other non-military entities develop them, AI-enabled LAWS are underway.

More importantly, we are very late to dealing with their potential risks.

Reading about all the Musk-ban-killer-robots pieces, I recalled an essay by computer scientist Bill Joy:

… The 21st-century technologies – genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues. …

He wrote this essay, Why the Future Doesn’t Need Us, in April 2000. Did we blow him off then because the Dot Com bubble had popped, and/or our heads hadn’t yet been fucked with by post-9/11’s hyper-militarization?

This part of his essay is really critical:

… Kaczynski’s dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy’s law – “Anything that can go wrong, will.” (Actually, this is Finagle’s law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved. …

The Kaczynski he refers to is Ted “Unabomber” Kaczynski, whom Joy believes was a criminally insane Luddite. But Kaczynski still had a valid point. Remember Stuxnet’s escape into the wild? In spite of the expertise and testing employed to thwart Iran’s nuclear aspirations, its developers missed something rather simple. In hindsight it might have been predictable, but to the experts it clearly wasn’t.

Just as it wasn’t obvious to computer scientists for more than a decade that every possible port — including printer and server maintenance ports — needed to be closed regardless of operating system so that ransomware couldn’t infect systems. Hello, WannaCry/Petya/NotPetya…

We’ve already seen photos and videos of individuals weaponizing drones — like this now-five-year-old video of an armed quadrotor drone demonstrated by a friendly chap, FPSRussia. The military-industrial complex cannot and should not believe it has a monopoly on AI-enabled LAWS when individuals have already programmed these devices. And we don’t even know yet how to describe what they are in legal terms, let alone how to limit their application, though we’ve received guidance (read: prodding) from technology experts already.

The genie is out of the bottle. We must find a way to coax it back into its confines.

~ 1 ~

SALT MINES: On a lighter note, molten salt may become a cheaper means to store energy collected by alternative non-fossil fuel systems. Grist magazine wrote about Alphabet’s X research lab exploring salt as a rechargeable battery, an alternative to the much more expensive current lithium battery systems. Lithium as well as cobalt have challenges not unlike other extractive fuels; they aren’t widely and cheaply available and require both extensive labor and water for processing. Salt — sodium chloride — is far more plentiful and less taxing on the environment when extracted or collected.

One opportunity came to mind as soon as I read the article. Did you know there has been a salt mine 1,200 feet below the city of Detroit for decades? It’s a source of road salt used on icy roads. It may also be the perfect place for a molten salt battery system; the Grist article said, “Electricity in the system is produced most efficiently when there is a wider temperature difference between the hot and cold vats.” A salt mine underneath Detroit seems like it could fit the bill.

Could Detroit become an Electric Motor City? Fingers crossed.

~ 0 ~

I feel for you folks in states with cloud cover — no good excuse today to take a break outside and slack off beneath the eclipse.

This is an open thread.

The Future of Work Part 4: The Kinds Of Jobs That Are At Risk

Recent improvements in hardware, a massive increase in the number of processors available, and new math tools have increased concerns that computers may soon replace millions of workers. The shorthand for this is Artificial Intelligence, although the term seems like hyperbole considering the kinds of things computers can do at present. The Obama White House issued a paper on this issue, Artificial Intelligence, Automation and the Economy, which can be found here. It cites two studies of the impact of AI on automation over the next 10 years or so. One, by the OECD, estimates about 9% of US jobs may be lost to automation. The other is a more interesting 2013 paper by two professors at Oxford, Carl Benedikt Frey and Michael A. Osborne, estimating that as many as 47% of US jobs could be lost or seriously affected over 10 or so years.

The Frey-Osborne Paper is here. Frey is a professor in a public policy college, and Osborne is in the engineering college; they aren’t economists. Perhaps for that reason, the introductory sections are instructive on the history of technological change and some of its effects on society. The technical approach of the Frey-Osborne Paper is to identify the bottlenecks that make it difficult to automate the tasks needed in a specific job. They use machine learning to identify patterns in the skills needed by specific jobs.

The authors identify three main bottlenecks to automation:

1. Tasks requiring perception and manipulation. P. 24
2. Tasks requiring creative intelligence. P. 25
3. Tasks requiring social intelligence. P. 26

The O-NET database of jobs is managed by the US Department of Labor. The current version contains detailed descriptions of job tasks for 903 occupations. Here are the top eight of the 21 tasks listed for forest firefighter, one of the bright-future jobs according to O-NET:

Rescue fire victims, and administer emergency medical aid.

Establish water supplies, connect hoses, and direct water onto fires.

Patrol burned areas after fires to locate and eliminate hot spots that may restart fires.

Inform and educate the public about fire prevention.

Participate in physical training to maintain high levels of physical fitness.

Orient self in relation to fire, using compass and map, and collect supplies and equipment dropped by parachute.

Fell trees, cut and clear brush, and dig trenches to create firelines, using axes, chainsaws or shovels.

Maintain knowledge of current firefighting practices by participating in drills and by attending seminars, conventions, and conferences.

Frey and Osborne describe their methodology as follows:

First, together with a group of [machine learning] researchers, we subjectively hand-labelled 70 occupations, assigning 1 if automatable, and 0 if not. For our subjective assessments, we draw upon a workshop held at the Oxford University Engineering Sciences Department, examining the automatability of a wide range of tasks. Our label assignments were based on eyeballing the O-NET tasks and job description of each occupation.

They identified nine variables related to the three bottlenecks and assigned each variable a level of difficulty (high, medium, or low) for each task. Then they verified their data and used it as training data in a machine learning program. The paper describes the way they prepared and ran the rest of the O-NET data through the trained model to estimate the likelihood that each job would be automated over the next 10 years or so. They produced a chart showing the likely effects of AI on categories of jobs; the following chart shows the results of their work.

The authors say that large numbers of transportation and logistics workers, office workers, and administrative support workers are at risk. They also think many service workers are at risk as robots become more efficient. They think people whose jobs require great manual dexterity and perception, or high levels of creativity, or strong social intelligence are reasonably safe in the near term. They assert that low-skill workers will have to move to jobs in the service sector that require these skills, and will have to sharpen their own skills through training and education.
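The pipeline Frey and Osborne describe (hand-label a subset of occupations, score each on the bottleneck variables, train a classifier, then estimate automation probabilities for the remaining occupations) can be sketched in miniature. To be clear, the paper’s actual model was a Gaussian process classifier over nine O-NET-derived variables; the occupations, scores, and plain logistic regression below are invented stand-ins for illustration only.

```python
import math

# Features: difficulty of (perception/manipulation, creativity, social
# intelligence) on a 0.0-1.0 scale; higher means harder to automate.
# These scores and labels are invented for illustration.
labelled = {
    "telemarketer":         ((0.1, 0.1, 0.3), 1),  # 1 = automatable
    "data entry clerk":     ((0.1, 0.1, 0.1), 1),
    "truck driver":         ((0.5, 0.1, 0.2), 1),
    "surgeon":              ((0.9, 0.6, 0.7), 0),  # 0 = not automatable
    "kindergarten teacher": ((0.6, 0.7, 0.9), 0),
    "forest firefighter":   ((0.9, 0.5, 0.7), 0),
}

def _sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, lr=0.5, epochs=5000):
    """Fit a logistic regression to the hand-labelled occupations
    by plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data.values():
            p = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def probability_automatable(w, b, x):
    """Estimated probability that an occupation with bottleneck scores x
    will be automated."""
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(labelled)
# Score two occupations the model never saw:
print(round(probability_automatable(w, b, (0.2, 0.1, 0.2)), 2))  # low bottlenecks
print(round(probability_automatable(w, b, (0.8, 0.7, 0.8)), 2))  # high bottlenecks
```

The ordering is the point: occupations scoring low on all three bottlenecks come out with a high estimated automation probability, and occupations scoring high come out low, which is exactly how the paper ranks the 903 O-NET occupations.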

There have been several articles on this issue lately. This one by Reuters says that investors think the future is in automation. Since the election, shares in companies working in that area are up dramatically, as is an ETF in the sector. Reuters says this means investors think that Trump’s assertion he will increase jobs in the manufacturing sector will not happen. Instead, as the cost of advanced technology drops, labor becomes expendable. Any increase in manufacturing will have little effect on overall unemployment, as displaced workers move to other jobs with the same employers doing “value-added” tasks.

Matthew Yglesias goes a step farther in this 2015 post at Vox. He says the big problem in job growth in the US is the lack of increase in productivity due to inadequate automation. He thinks rising productivity is essential to higher wages, or more likely a reduction in the time spent working. Yglesias lays out the case for not worrying. He ignores, as all economists do, the possibility that the returns from work might be shared more equitably between capital and labor. His relentless optimism contrasts with the lived experience of millions of Americans, the real lives that gave us Trumpism.

I wonder what Yglesias makes of this article in the Guardian discussing the efforts of the billionaire Ray Dalio to create software to manage the day-to-day operations of the world’s largest hedge fund in accordance with “… a set of principles laid out by Dalio about the company vision.” The article provides a more pessimistic view of the future even for management work.

I don’t have an opinion about these forecasts or the reasoning behind them. Yglesias says people will work less, but doesn’t explain how workers who have no bargaining power will be able to increase their income enough to have free time. Dalio must think that he is so wise that his AI automaton will replicate his success forever, and that his competitors won’t take advantage of the rigidity of his principles.

Suppose that the investors described by Reuters are right, that manufacturing increases but without increased employment in the sector. What will all those Trump voters do next? Change their minds about what they want from the economy and the government that fosters it, and live happily ever after?

I think both Yglesias and Dalio are so steeped in neoliberal economics with its model of human beings as Homo Economicus that they assume these changes will come about smoothly. Nothing else will change; there are no dynamic tipping points. No large number of human beings will raise hell. There will be no feedback effects. The displaced of all ages will just retrain to some other job and/or resign themselves to their reduced lives. They won’t resist, or riot, or insist on government protection, or demand a completely new system. Investment bankers will blandly accept the judgment of computers as to their value and will not insist on being treated like superstars even if the machine says they are just gas giants.

Yglesias and Dalio are wrong. That is precisely what history says won’t happen.

The Future of Work Part 3: An Example of Artificial Intelligence

We don’t have a clear definition of artificial intelligence, but we have some examples. One is machine translation, the subject of a recent article in the New York Times Magazine, The Great A.I. Awakening by Gideon Lewis-Kraus (the “AI Article”). It’s a beautiful piece of science writing. The author had the opportunity to see how employees of Google developed a neural network machine translation system and implemented it. It’s long, but I highly recommend it. Rather than try to summarize it, I will draw out a few points.

The idea of neural net systems was inspired by our current understanding of the way the human brain works. There are about 100 billion neurons in the average brain at birth. As we age, connections among neurons increase, so that each can be connected to as many as 10,000 other neurons. Thus, there are trillions of possible connections. Many of these are pruned as we age because they are not used. Many of the remaining connections are used to maintain the body, to manage specific human processes like the endocrine system, or to monitor for balance and pain.

One way to think about AI processes is to see them as pattern-matching systems. Until recently we didn’t have the processing power to handle even a tiny fraction of the brain’s connections, so the early efforts at simulating the brain were bound to fail. On the other hand, computers have long been used to match patterns in relatively small sets of data. Here’s a technical example. One of the main lessons of the AI Article is that it takes massive amounts of processing power and massive amounts of data to begin to approach the connective power of the brain; Google also needed new mathematical theories to make it possible.
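To make the pattern-matching framing concrete, here is a deliberately tiny neural network in plain Python: two inputs, a small hidden layer, one output, trained by backpropagation to reproduce the XOR pattern. This is a sketch of the general technique only; real translation networks are many orders of magnitude larger, and nothing here reflects Google’s actual implementation.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# The pattern to be matched: the XOR truth table.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 8  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Run the two-layer network; return hidden activations and output."""
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

# Train by backpropagation: nudge every weight downhill on squared error.
lr = 1.0
for _ in range(10000):
    for x, y in samples:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

for x, y in samples:
    print(x, "->", round(forward(x)[1], 2))
```

After training, the outputs for (0,1) and (1,0) should sit near 1 and those for (0,0) and (1,1) near 0: the network was never told a rule, it extracted the pattern from examples, which is the essence of the approach at toy scale.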

The astonishing thing is that the number of people needed to create those theories and do a preliminary setup is so small: maybe 10 all told. The full implementation required a team of 100 or so. More people were needed to create a new chip and get it working, and to install the new processors into the Google system, but again, the number seems to be in the hundreds, and it isn’t clear that there were that many new jobs.

The task was made easier by the fact that Google had a huge library of documents translated between languages. These served as training materials for the translation project. Google has a huge library of images, YouTube videos, and other materials suitable for training. There won’t be many jobs created in this area either.

These are two of the categories of new jobs identified by the White House in the report discussed here. There don’t seem to be many new openings in new fields, but who knows. And there is nothing here likely to create jobs for anyone but the most educated people, though, of course, there may be jobs created in related fields.

The new platform created by these small teams can be adapted for many different problems. Doubtless as those ramp up there will be some new jobs, but it seems unlikely that there will be a hiring burst. Instead, we will see a war of dollars as the big tech companies compete world-wide for the top talent. The AI Article says that the Google team includes people from around the world. We get one or two of the personal stories, and they are amazing.

The AI Article gives a good introduction to the way neural networks work. I caution readers that these parts are metaphorical, and it is unlikely to be useful to try to reason with those metaphors, either to extend them or to make predictions about the future. The metaphor is not the thing. It is merely an aid to understanding the thing for those with little or no background. I link to a couple of pieces here that can be used to gain a deeper understanding of neural networks and deep learning.

At the end of the AI Article, there is a discussion of two possible ways of understanding consciousness. One view sees consciousness as something special beyond the mere physical actions of the brain. It finds its origins in the mind-body dualism of Descartes, and is disparagingly referred to as The Ghost in the Machine. Religious people might see it as the soul, or the Atman; but I’m not sure that’s right. The other view dissolves this problem, and sees consciousness as an emergent phenomenon that arises from the complexity of the connectivity in the brain. The AI Article doesn’t go into this area in much detail.

And yet the rise of machine learning makes it more difficult for us to carve out a special place for us. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view.

For those interested in pursuing this matter, see Consciousness Explained, by Daniel Dennett. The linked Wikipedia article gives a brief description of the book along with Searle’s objections to it.

I don’t know enough to have an opinion about any of this, but I hope other people are thinking about one aspect of this problem. In Western Liberalism, it is a given that there is something special about human beings, and about each of us individually. I don’t know how much of that arises from Christianity, with its emphasis on the relation between, and the likeness of, each individual to the Creator. There is something bound to be unnerving in the combination of a) the idea that our individual selves are just complications of our individual brains, and b) our increasing ability to model that complication in our electronic gear. I don’t have any immediate apocalyptic idea about this, not least for the reasons in this presentation. But every new idea about human beings has been twisted by despots and demagogues for their own purposes. It’s dangerous to pretend that isn’t going to happen with these ideas.

The Future of Work Part 2: The View From the White House

Top advisors in the Obama Administration published a report titled Artificial Intelligence, Automation, and the Economy in December 2016, which I will call the AI Paper. It combines the views of the Council of Economic Advisers, the Domestic Policy Council, the Office of Science and Technology Policy, the National Economic Council, and the US Chief Technology Officer into a single report. There is a brief Executive Summary which gives a decent overview of the substance of the report, followed by a section on the economics of artificial intelligence technology and a set of policy recommendations. It’s about what you’d expect from a committee, weak wording and plenty of caveats, but there are nuggets worth thinking about.

First, it would be nice to have a definition of artificial intelligence. There isn’t one in this report, but it references an earlier report, Preparing For the Future of Artificial Intelligence, which dances around the issue in several paragraphs. Most of the definitions are operational: they describe the way a particular type of AI might work. But these are all different, just as neural network machine learning is different from rules-based expert systems. So we wind up with this:

This diversity of AI problems and solutions, and the foundation of AI in human evaluation of the performance and accuracy of algorithms, makes it difficult to clearly define a bright-line distinction between what constitutes AI and what does not. For example, many techniques used to analyze large volumes of data were developed by AI researchers and are now identified as “Big Data” algorithms and systems. In some cases, opinion may shift, meaning that a problem is considered as requiring AI before it has been solved, but once a solution is well known it is considered routine data processing. Although the boundaries of AI can be uncertain and have tended to shift over time, what is important is that a core objective of AI research and applications over the years has been to automate or replicate intelligent behavior. P. 7.

That’s circular, of course. For the moment let’s use an example instead of a definition: machine translation from one language to another, as described in this New York Times Magazine article. The article sets up the problem of translation and the use of neural network machine learning to improve previous rule-based solutions. For more on neural network theory, see this online version of Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville. H/T Zach. The introduction may prove helpful in understanding the basics of the technology better than the NYT magazine article. It explains the origin of the term “neural network” and the reason for its replacement by the term “deep learning”. It also puts some meat on the skeletal metaphor of layers used in the NYT magazine article.
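As a rough, hypothetical illustration of the “layers” metaphor (the numbers below are arbitrary, not a trained model), each layer takes a list of numbers, combines them with weights, squashes the results, and hands them up to the next layer:

```python
import math

# A minimal sketch of the "layers" metaphor: data flows through a stack of
# layers, each applying weighted sums followed by a nonlinearity.
# The weights here are arbitrary illustrative numbers, not a trained model.

def layer(inputs, weights, biases):
    """One dense layer: weighted sums squashed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Two stacked layers turn a 3-number input into a single output "score".
hidden = layer([0.5, -1.0, 2.0],
               weights=[[0.1, 0.4, -0.2], [-0.3, 0.8, 0.5]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)  # a single value between 0 and 1
```

In a real deep learning system the layers are far wider, there are many more of them, and the weights are learned from data rather than written by hand.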

The first section of the AI Paper takes up the economic impact of artificial intelligence. Generally it argues that to the extent AI improves productivity it will have positive effects, because it decreases the need for human labor input for the same or higher levels of output. This kind of statement is an example of what Karl Polanyi calls labor as a fictitious commodity. The AI Paper tells us that productivity has dropped over the last decade. That’s because, they say, there has been a slowdown in capital investment, and a slowdown in technological change. Apparently to the writers, these are unconnected, but of course they are connected in several indirect ways. The writers argue that improvements in AI might help increase productivity, and thus enable workers to “negotiate for the benefits of their increased productivity, as discussed below.” P. 10.

The AI Paper then turns to a discussion of the history of technological change, beginning with the Industrial Revolution. We learn that it was good on average, but lousy for many who lost jobs. It was also lousy for those killed or maimed working at the new jobs and for those marginalized, wounded and killed by government and private armies for daring to demand fair treatment. These are presumably categorized as “market adjustments”, which, according to the AI Paper, “can prove difficult to navigate for many.” P. 12. Recent economic papers show that wages for those affected by these market adjustments never recover, and we can blame the workers for that: “These results suggest that for many displaced workers there appears to be a deterioration in their ability either to match their current skills to, or retrain for, new, in-demand jobs.” Id.

The AI Paper then takes up some of the possible results of improvements in AI technology. Job losses among the poorest paid employees are likely to be high, and wages for those still employed will be kept low by high unemployment. Jobs requiring less education are likely to be lost, while those requiring more education are likely safer, though certainly not absolutely safe. The main example is self-driving vehicles. Here’s their chart showing the number of driving jobs that might be lost.

That doesn’t include any knock-on job losses, like reductions in hiring at roadside restaurants or dispatchers.

It also doesn’t include the possible new jobs that AI might create. These are described on pp. 18-19. Some are in AI itself, though as the NYT magazine article shows, it doesn’t seem like there will be many. Some new jobs will be created because AI increases productivity of other workers. Some are in new fields related to handling AI and robots. That doesn’t sound like jobs for high school grads. Most of the jobs have to do with replacing infrastructure to make AI work. Here’s Dave Dayen’s description of the need to rebuild all streets and highways so autonomous vehicles can work. Maybe all those displaced 45 year old truck drivers can get a job painting stripes on the new roads. There are no numerical estimates of these new jobs.

The bad news is buried in Box 2, p. 20. Unless there are major policy changes, it’s likely that most of the wealth will be distributed to the rich. And then there’s this:

In theory, AI-driven automation might involve more than temporary disruptions in labor markets and drastically reduce the need for workers. If there is no need for extensive human labor in the production process, society as a whole may need to find an alternative approach to resource allocation other than compensation for labor, requiring a fundamental shift in the way economies are organized.

That certainly opens a new range of issues.

Update: the link to the AI Paper has been updated.

The Future of Work Part 1: John Maynard Keynes

As the global depression spiraled towards its depths in 1930, John Maynard Keynes wrote a cheerful article on the future of work: Economic Possibilities for our Grandchildren. He argued that it wouldn’t be too long before capital accumulation and technological change would come near to solving the economic problem of material subsistence, of producing enough goods and services to provide everyone with the necessities of life and largely relieving them of the burden of work.

The paper begins with a very brief description of the problems of the time:

We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another. The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption; the improvement in the standard of life has been a little too quick; the banking and monetary system of the world has been preventing the rate of interest from falling as fast as equilibrium requires.

This statement anticipates the views of Karl Polanyi in The Great Transformation, and of Hannah Arendt in The Origins of Totalitarianism. They argue persuasively that massive technological changes led to changes in social structures which were profoundly upsetting to large numbers of people. Polanyi says that a decent society would take steps to relieve people of these stresses, perhaps by forcing a slower pace of change, or perhaps by legislation to protect the masses. Arendt claims that for a while, imperialism offered a solution by absorbing some of the excess workers. Both believed that the stresses of constant change and displacement of workers played an important role in the rise of fascism.

Keynes then points out the history of growth in world output. From the earliest time of which we have records, he says, to the early 1700s, there was little or no change in the standard of life of the average man. There were periods of increase and decrease, but the average was well under .5%, and never more than 1% in any period. The things available at the end of that period are not much different from those available at the beginning. He argues that growth began to accelerate when capital began to accumulate, around 1700.

It’s interesting to note that this sketch of economic history accords nicely with that provided by Thomas Piketty in Capital In The Twenty-First Century. This is Piketty’s Table 2.5. Compare this with Figure 2.4, The growth rate of world per capita output since Antiquity until 2100.

Keynes argues that since 1700 there has been a great improvement in the lives of most people, and there is every reason to think that will continue. Certainly there was the then current problem of technological unemployment, with technology displacing people faster than it was creating new jobs. But he says it is reasonable to think that in 100 years, by 2030, people will be 8 times better off, absent war and other factors. He says there are two kinds of needs, those that are absolute, and those with the sole function of making us feel superior to others. The latter may be insatiable, he says, but the former aren’t, and we are getting closer to satisfying them. In so doing, we are getting close to solving the ancient economic problem: the struggle for subsistence.
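Keynes’ eightfold figure is just compound growth arithmetic; a quick check (my calculation, not his) shows it implies a steady growth rate of roughly 2.1% per year:

```python
# Quick check on Keynes' arithmetic: what steady annual growth rate
# makes living standards 8 times higher after 100 years?
rate = 8 ** (1 / 100) - 1
print(f"{rate:.2%}")  # -> 2.10% per year

# Conversely, a century of 2.1% compound growth:
print(round(1.021 ** 100, 1))  # roughly 8-fold
```

Modest-looking annual growth, compounded over a century, is all the prediction requires; the argument turns less on arithmetic than on whether that growth continues and who shares in it.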

That problem is indeed ancient. It shows up in Genesis, 3:17. Adam and Eve have eaten the fruit of the Tree of Knowledge of Good and Evil, and the Almighty punishes Adam with these words:

To Adam he said, “Because you listened to your wife and ate fruit from the tree about which I commanded you, ‘You must not eat from it,’ “Cursed is the ground because of you; through painful toil you will eat food from it all the days of your life.

To be relieved of this ancient curse should be a wonderful thing. Keynes doesn’t think it will be an easy transition though. The struggle for subsistence is replaced by a new problem: how to use the new freedom, how to use the new-found leisure. He thinks people will have to have some work, at least at first, to give us time as a species to learn to enjoy leisure. He thinks that those driven to make tons of money will be seen once again in moral terms: as committing the sin of Avarice. They will be ignored or controlled in the interests of the rest of us.

As it turns out, this wasn’t one of Keynes’ better predictions. It isn’t clear that there is such a thing as a minimum of absolute needs, for example, and technology has not yet removed the need for all work. Still, the goal of solving the economic problem seems sensible, and his discussion of the problems of a possible transition seems accurate.

People want to work, and they want everyone else to work too. There have been a number of reported interviews with Trump voters, many of whom claim that this has become a give-away society. People complain that it pays better to be out of work than in work because of all the free stuff you get, health care (Medicare), free phones, food stamps, SSDI, free housing and so on, so they voted for Trump thinking he’d fix it so that only the deserving poor would get that free stuff. They think people don’t want to work, which feels like projection, and if they have to work, everyone should. Work has a number of social benefits, including a sense of purpose, responsibility, and pride. How are these to be handled in Keynes’ Eden?

The pace of technological change has picked up. It not only affects blue-collar workers, it’s starting to hit on doctors, lawyers and even translators. Here’s an article on improvements in translation based on neural network machine learning from the New York Times Magazine; and here’s a report from the White House on the impact of artificial intelligence on jobs. And here’s an article in the NYT’s Upshot column discussing the White House Report, and a rebuttal from Dean Baker.

These problems are crucial to the future of democracy. They concern the nature of our institutions and our social structures, as well as questions about our nature as human beings. I’ll take these up in more detail in future posts in this series.

Update: Here’s a link to the Keynes paper discussed in this post.

Monday: A Border Too Far

In this roundup: Turkey, pipelines, and a border not meant to be crossed.

It’s nearly the end of the final Monday of 2016’s General Election campaign season. This shit show is nearly over. Thank every greater power in the universe we made it this far through these cumulative horrors.

Speaking of horrors, this Monday’s movie short is just that — a simple horror film, complete with plenty of bloody, gritty gore. It’s rated mature, not for any adult content but for its violence. The film is about illegal immigrants who want more from life, but it plays with the concepts of alien identity and zombie-ism. Who are the illegals, the aliens, the zombies? What is the nature of the predator and their prey? Does a rational explanation for the existence of the monstrous legitimize the horror they perpetuate in any way?

The logline for this film includes an even shorter tag line: Some borders aren’t meant to be crossed. This is worth meditating on after the horrors we’ve seen this past six months. Immigrants and refugees aren’t the monsters. And women aren’t feeble creatures to be marginalized and counted out.

Should also point out this film’s production team is mostly Latin American. This is the near-future of American storytelling and film. I can’t wait for more.

Tough Turkey
The situation in Turkey is extremely challenging, requiring diplomacy a certain Cheeto-headed candidate is not up to handling and will screw up if he places his own interests ahead of those of the U.S. and the rest of the world.

  • Luxembourg’s foreign minister compares Erdoğan’s purge to Nazi Germany (Deutsche Welle) — Yeah, I can’t argue with this when a political party representing an ethnic minority and a group sharing religious dogma are targeted for removal from jobs, arrest and detention.
  • Op-Ed: Erdoğan targeting critics of all kinds (Guardian) — Yup. Media, judges, teachers, persons of Kurdish heritage or Gulenist religious bent, secularists, you name it. Power consolidation in progress. Democracy, my left foot.
  • HDP boycotts Turkish parliament after the arrest of its leaders (BBC) — Erdoğan claimed the arrested HDP leaders were in cahoots with the PKK, a Kurdish group identified as a terrorist organization. You’ll recall HDP represents much of Turkey’s Kurdish minority. But Erdoğan also said he doesn’t care if the EU calls him a dictator; he said the EU abets terrorism. Sure. Tell the cities of Paris and Brussels that one. Think Erdoğan has been taking notes from Trump?
  • U.S. and Turkish military leaders meet to work out Kurd-led ops against ISIS (Guardian) — Awkward. Turkish military officials were still tetchy about an arrangement in which Kurdish forces would act against ISIS in Raqqa, Syria, about 100 miles east of Aleppo. The People’s Protection Units (YPG) militia — the Kurdish forces — will work in concert with Arab members of Syrian Democratic Forces (SDF) coalition in Raqqa to remove ISIS. Initial blame aimed at the PKK for a car bomb after HDP members were arrested heightened existing tensions between Erdoğan loyalists and the Kurds, though ISIS later took responsibility for the deadly blast. Depending on whose take one reads, the Arab part of SDF will lead the effort versus any Kurdish forces. Turkey attacked YPG forces back in August while YPG and Turkey were both supposed to be routing ISIS.

In the background behind Erdoğan’s moves to consolidate power under the Turkish presidency and the fight to eliminate ISIS from Syria and neighboring territory, there is a struggle for control of oil and gas moving through or by Turkey.

Russia lost considerable revenue after oil prices crashed in 2014. A weak ruble has helped, but to replace the revenue lost to oil’s lower price, Russia has increased output to record levels. Increased supply only reduces prices further, especially when Saudi Arabia, other OPEC producers, and Iran cannot agree upon and implement a production limit. If Russia will not likewise agree to production curbs, oil prices will remain low and Russia’s revenues will continue to flag.

Increasing pipelines for both oil and gas could bolster revenues, however. Russia can literally throttle supply near its end of hydrocarbon pipelines and force buyers in the EU and everywhere in between to pay higher rates — the history of Ukrainian-Russian pipeline disputes demonstrates this strategy. Bypassing Ukraine altogether would help Russia avoid both established rates and conflict there with the west. The opportunities encourage Putin to deal with Erdoğan, renormalizing relations after Turkey shot down a Russian jet last November. Russia and Turkey had met in summer of 2015 to discuss a new gas pipeline; they’ve now met again in August and in October to return to plans for funding the same pipeline.

A previous pipeline ‘war’ between Russia and the west ended in late 2014. This conflict may only have been paused, though. Between Russia’s pressure to sell more hydrocarbons to the EU, threats to pipelines from PKK-attributed terrorism and ISIS warfare near Turkey’s southwestern border, and implications that Erdoğan has been involved in ISIS’ sales of oil to the EU, Erdoğan may be willing to drop pursuit of EU membership to gain more internal control and profit from Russia’s desire for more hydrocarbon revenues. In the middle of all this mess, Erdoğan has expressed a desire to reinstate the death penalty for alleged coup plotters and dissenters — a border too far for EU membership, since the death penalty is not permitted under EU law.

This situation requires far more diplomatic skill than certain presidential candidates will be able to muster. Certainly not from a candidate who doesn’t know what Aleppo is, and certainly not from a candidate who thinks he is the only solution to every problem.

Cybery miscellany

That’s it for now. I’ll put up an open thread dedicated to all things election in the morning. Brace yourselves.

Wednesday: Big Wheels Turning

Hard to believe this was made in 1982. Yeah, the production quality doesn’t match today’s digital capabilities, but the story itself seems really prescient. How can an ethically-compromised bloviating bigot manage to fumble his way into office?

Now you know. Bet you can even offer constructive feedback on how director Danny DeVito could update this script for today’s social media-enhanced election cycle.

Self-Driving Vehicles

  • NHTSA issues guidelines for self-driving cars (Detroit Free Press) — FINALLY. But is it a bit too late now that Uber already has a fleet on the streets of Pittsburgh and Tesla has been running beta cars? Let’s face it: the federal government has been very slow to acknowledge the rise of artificial intelligence in any field, let alone the risks inherent in computer programming used in vehicles. We’re literally at the end of a two-term presidency, on the cusp of entirely new policies toward transportation, and NOW the NHTSA steps in? We need to demand better and faster rather than this future-shocked laggy response from government — and that goes for Congress as well as the White House. Congress fails to see the importance of early regulation in spite of adequate warning:

    Legislators warned automakers at the 15 March Senate hearing that the governing body took a dim view of the industry’s ability to self-regulate. “Someone is going to die in this technology,” Duke University roboticist Missy Cummings told the US Senate during a tense hearing where she testified alongside representatives from General Motors and Delphi Automotive, among others.

    Senators Ed Markey and Richard Blumenthal, who questioned car executives at the hearing, had cosponsored a 2015 bill to regulate self-driving automobiles. The bill was referred to committee and never returned to the floor. [source: Guardian]

    In the meantime, we have an initial 15-point guideline the NHTSA wants to address; are they enough? Is a guideline enough? Witness Volkswagen’s years-long fraud, flouting laws; without more serious consequences, would a company with Volkswagen’s ethics pay any heed at all to mere guidelines? Are you ready to drive on the road with nothing but non-binding guidelines to hold makers of autonomous cars accountable?

  • Multiple Tesla car models hackable (Keen Security Lab) — Check this video on YouTube. At first this seems like an innocuous problem, just lights, mirrors, door locks…and then * boom * the brakes while driving. These same functions would also be controlled by AI in a self-driving car, by the way, and they’re already on the road. This is exactly what I mean by the feds being slow to acknowledge AI’s rise.
  • ‘OMG COOL’-like impressions from early self-driving Uber passengers (Pittsburgh Post-Gazette) — Criminy. The naïveté is astonishing. Of course this technology seems so safe and techno-cool when you have an Uber engineer and programmer along for the ride, offering the illusion of safety. Like having a seasoned, licensed taxi driver. Why not just pay for an actual human to drive?
  • Tesla caught in back-and-forth with Mobileye (multiple sources) — After analyzing the May 2016 fatal accident in Florida involving Tesla’s semi-autonomous driving system, Tesla tweaked the system. The gist of the fatal accident appears to have been a false-positive misinterpretation of the semi-trailer as an overhead road sign, for which a vehicle would not slow down. But this particular accident alone didn’t set off a dispute between Tesla and the vendor for its Autopilot system, Mobileye. Another fatal accident in China which occurred in January was blamed on Tesla’s Autopilot — but that, too, was not the point of conflict between Tesla and its vendor. Mobileye apparently took issue with Tesla over “hands on” versus “hands-free” operation; the computer vision manufacturer’s 16-SEP press release claims Tesla said the Autopilot system would be hands on but was rolled out in 2015 as hands-free. Mobileye may also have taken issue with how aggressively Tesla was pursuing its own computer vision technology even before the two companies agreed to end their relationship this past July.  A volley of news stories over the last two weeks suggest there’s more going on than the hands on versus hands-free issue. Interestingly enough, the burst of stories began just after a hacker discovered there’s a previously undisclosed dash cam capturing shots of Tesla vehicle operations — and yet only a very small number of the flurry of stories mentioned this development. Hmm. Unfortunately, the dash cam feature would not have captured snaps for the two known fatal accidents because the nature of the accidents prevented the camera from sending images to Tesla servers.

Artificial Intelligence

  • The fall of humans is upon us with our help (Forbes) — this article asks what happens when white collar jobs are replaced by artificial intelligence. Oh, how nice, Forbes, that you worry about the white collar dudes like yourselves but not the blue collar workers already being replaced. How about discussing alternative employment for 3.5 million truck drivers?
    Or the approximately 230,000 taxi drivers?
    How about subway, streetcar, and tram operators (for whom I don’t currently have a number)?
    How about the administrative jobs supporting these workers? This is just a portion of transportation alone which will be affected by the introduction of AI in self-driving/autonomous vehicles. What about other blue collar jobs at risk — like fast food workers, of which there are 3.5 million? And we wonder why Trump appeals to a certain portion of the working class. He won’t be informed at all about this, and will have no solution except to remove persons of color as competition for employment. But the left must develop a cogent response to this risk immediately. It’s already here, the rise of machines as AI and algorithmic replacements for humans. Let’s not wait for the next Luddite rebellion 2.0 — or is Trump’s current support the rebellion’s inception?
  • But every business needs AI! (Forbes) — Uh…no conflict here at all with the previous article. Nope. Just playing the refs. Save America, people, just keep buying! (By the way, note how this contributor touts the Hello Barbie chatbot as a positive sign, though Mattel’s internet-enabled Barbie products have had some serious problems with security.)
  • The meta-threat of artificial intelligence (MIT Technology Review) — Doubt my opinion? Don’t take it from me, then, take it from experts including one who plans to make a fortune from AI — like Elon Musk.

Longread: Academia becomes the new white collar underclass
You may have noted Long Island University-Brooklyn’s 12-day lockout, which was not really resolved last week but deferred by a contract extension. The dispute originated over a pay gap between Brooklyn and two other better paid LIU campuses, a ridiculous sticking point given the small distance between them. LIU barred instructors from campus and halted their benefits during the lockout. Students walked out, infuriated by the temps who subbed in for the locked-out instructors — a cafeteria worker in one case filled in for an English instructor. LIU’s lockout won’t be the only such conflict over academic wages. To understand the scale of the problem, you’ll want to read this piece at Guernica, which explains how academia is being shaken down across the U.S., not just in Brooklyn. I remember asking an academic administrator back in 2006 what would happen when post-secondary education was commodified; they couldn’t imagine it ever happening. And now the future has arrived. What are we going to do about this while retaining U.S. standards in education?

Hope you’re liking the site revamp! Do leave a comment if you find anything isn’t working up to snuff.

‘Picking on’ Volkswagen: Why Follow Dieselgate?

[photo: macwagen via Flickr]


One of our commenters described my attention to Dieselgate as ‘picking on’ Volkswagen. It’s not as if there haven’t been scandalous problems with other automotive industry manufacturers, like General Motors’ ignition switches or Takata’s airbag failures, right?

But Volkswagen earns greater attention here at this site because:

1) A critical mass of emptywheel readers are not familiar with the automotive industry, let alone manufacturing; they do not regularly follow automotive news. Quite a number are familiar with enterprise information security, but not car manufacturing or with passenger vehicle security. Many of the readers here are also in policy making, law enforcement, judiciary — persons who may influence outcomes at the very beginning or very end of the product manufacturing life cycle.

2) This is the first identified* multi-year incident in which an automotive industry manufacturer used computer programming in a street-ready vehicle to defraud consumers and willfully violate multiple U.S. laws. This willfulness wholly separates the nature of this risk from other passenger vehicle vulnerabilities, ex: Fiat Chrysler’s hackable Uconnect dashboard computers or Nissan’s unprotected APIs for keyless remotes. (These latter events arose from inadequate info security awareness, though the responsiveness of vehicle manufacturers after notification may be in question.)

3) Volkswagen Group is the single largest passenger vehicle manufacturer in Europe. This is no small matter considering half of all passenger vehicles in Europe are diesel-powered. Health and environmental damage in the U.S. from 600,000 passenger diesels has been bad enough; diesel emissions are taking lives in the tens of thousands across Europe. 75,000 premature deaths in 2012 alone were attributed to urban NO2 exposures, a principal source of which is diesel engines. It was testing in the U.S. against U.S. emissions standards which brought VW’s ‘cheating’ to light, making it impossible for the EU to ignore any longer. The environmental damage from all Volkswagen passenger diesels combined isn’t localized; these additional non-compliant emissions exacerbate global climate change.

These are the reasons why Dieselgate deserved heightened scrutiny here to date — but the reasons why this scandal merits continued awareness have everything to do with an as-yet unrealized future.

We are on the cusp of a dramatic paradigm shift in transportation, driven in no small part by the need for reduced emissions. Development and implementation of battery-powered powertrains are tightly entwined with the development of artificial intelligence for self-driving cars. Pittsburgh, PA is already a testing ground for a fleet of self-driving Uber vehicles; Michigan’s state senate seeks changes to the state’s vehicle code to permit self-driving cars to operate without a human driver to intervene.

All of this represents a paradigm shift in threats to the public on U.S. highways. Self-driving car makers and their AI partners claim self-driving vehicles will be safer than human-driven cars. We won’t know for some time whether that’s true, whether AI will really make better decisions than human drivers.

But new risks arise:

  • An entire line of vehicles can pose a threat if they are programmed to evade laws, ex: VW’s electronic control unit using proprietary code which could be manipulated before installation. (Intentional ‘defect’.)
  • An entire line of vehicles can be compromised if they have inherent vulnerabilities built into them, ex: Fiat Chrysler’s Uconnect dashboard computers. (Unintentional ‘defect’.)
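To make the first of those risks concrete, here is a deliberately simplified sketch in Python of the kind of logic a defeat device embodies. The function names, signals, and thresholds are hypothetical illustrations, not VW’s actual ECU code; the real device reportedly inferred a dynamometer test from signals like steering-wheel movement and how closely the speed trace matched a regulatory test cycle, then switched emissions calibrations accordingly.

```python
def looks_like_emissions_test(steering_variance: float,
                              speed_matches_test_cycle: bool) -> bool:
    """Hypothetical heuristic: a car on a dynamometer shows almost no
    steering movement while its speed closely tracks a standard test cycle."""
    return steering_variance < 0.01 and speed_matches_test_cycle

def select_emissions_mode(steering_variance: float,
                          speed_matches_test_cycle: bool) -> str:
    """Return which engine calibration a (hypothetical) cheating ECU loads."""
    if looks_like_emissions_test(steering_variance, speed_matches_test_cycle):
        # Full emissions controls engaged: the car passes the regulatory test.
        return "low-NOx test mode"
    # Controls dialed back for performance/fuel economy: higher real-world NOx.
    return "road mode"

# On the dyno: wheel steady, speed tracking the regulatory cycle.
print(select_emissions_mode(0.0, True))
# On the road: normal steering input, arbitrary speeds.
print(select_emissions_mode(2.5, False))
```

The point of the sketch is how little code the deception requires: a few sensor checks gate which calibration runs, and only the road calibration reflects what the vehicle actually emits in use.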

Let’s ‘pick on’ another manufacturer for a moment: imagine every single Fiat Chrysler/Dodge/Jeep vehicle on the road in 5-10 years programmed to evade state and federal laws on emissions and diagnostic tests for road-worthiness. Imagine that same programming exploit used by criminals for other means. We’re no longer looking at a mere hundred thousand vehicles a year but millions, with an even greater number of people at risk.

The fear of robots is all hype, until one realizes some robots are on the road now, and in the very near future all vehicles will be robots. Robots are only as perfect as their makers.

An additional challenge posed by Volkswagen is its corporate culture and the deliberate use of a language barrier to frustrate fact-finding and obscure responsibility. Imagine foreign transportation manufacturers not only using cultural barriers to hide their deliberate violation of laws, but masking the problems in their programming using the same techniques. Because of GM’s labyrinthine corporate bureaucracy, identifying the problems which contributed to the ignition switch scandal was difficult. Imagine how much more cumbersome it would be to tease out the roots if an entire corporate culture deliberately hid the source, extending that obfuscation into the coding language itself. Don’t take my word for how culture is used to this end; listen to a former VW employee who explains how VW’s management prevaricates on its ‘involvement’ in Dieselgate (video at 14:15-19:46).

Should we really wait another five to ten years to ‘pick on’ manufacturers of artificially intelligent vehicles — cars with the ability to lie to us as much as their makers will? Or should we look very closely now at the nexus of transportation and programming where problems already occur, and create effective policy and enforcement for the road ahead?
_________
* A recent additional study suggests that Volkswagen Group is not the only passenger diesel manufacturer using emissions control defeat devices.