Apr 24

Increase someone’s costs too much and they’ll pay attention – you might not like it

This writeup is in response to an article I just read about the “walk outs” at fast food restaurants in NY and Chicago recently. In the articles I read there is so much glee about how these people are standing up for themselves and how heartening it is to see them ask for what they “deserve”. Here’s another report on it. I will bet you that 90% of these people have no idea what actually goes into running the company. They don’t care if the owner / operator had to take a second mortgage to make payroll last month due to a mad cow scare. They don’t care if a competing restaurant / shop opened next door and revenue just dropped 10%. They don’t see it and they don’t care. They just want “what they deserve”, and they’re free to make up whatever value suits them. They think that what they say they “need” (which is typically how much they “want”… because we Americans sometimes forget there is a difference) is what they actually “deserve”. It’s simply not true.

If you’ve read my blog for any length of time you’ll quickly find that I’m all for making money and that I want people to do well. I want employees (people who work for other people) to be treated well, and I want business owners, the risk takers, to get as much return on their investment, yes, profit, as possible. If the risk takers can’t be profitable, then their incentive to actually take a risk disappears, so it’s really in everyone’s best interest that profit, even “obscene profit”, continues to be possible.

So what about the employees? Should they get a cut of the profits? If someone works at Wendy’s, should they be paid a percentage of what the restaurant makes instead of minimum wage? Should they be paid $15 an hour just because they feel like that is a living wage? I’m not saying that they don’t need a certain amount of income to pay their bills. They very well might, but that’s not their employer’s problem, that’s their problem.

As someone who is trying to pull his own business up by the bootstraps I am well aware that an employee is often necessary. If I need a job done (or, commonly, several jobs done when I can only do one) it makes sense for me to find out who can do the other job(s). I’ll interview several candidates and determine which one I think can do the job the best, which one can do the job for the least money, which one will require the least “management”, which one I feel like I can trust, etc. Then I try to hire the person that is the intersection of those items – or, more accurately, the best mix.

My job as the business owner is to make sure that my hire is profitable, right? I mean, it’s got to at least break even or it’s just throwing money away. Hell, breaking even isn’t all that great unless I build a nice salary in for myself rather than taking the profits home and living off of them. If I believe that hiring you will cause me to be more profitable then that’s what I’ll do. If you’re up against someone else and I believe that hiring them would make me more profitable than if I hired you then guess who I’m going to hire? Seriously, if YOU were running the business which one would you hire? The one that helped you make payroll and put a little aside for reserves for the lean months or the one that made it possible to keep treading water? (As an aside, this is why many people have jobs created FOR them by simply approaching the interview process the right way. If you can walk in the door and, instead of saying you need a job, say that you can help make me more profitable… you’ve got my ear… the same is true for any business owner)

For the sake of argument, let’s say the employees involved in these strikes do know something about the business and ownership financials. Let’s say they understand, and all they really want is what they think will allow the company to “break even” and the owner to make a “reasonable profit”. Well, as an investor, as a risk taker, I’ll tell you to take a hike if you make me an offer where I take the risk and only you get the guaranteed reward. If I have to deal with people who take no risk, but who want to tell me the right way to do things or make demands of me, then that actually raises the profit incentive required to get me to take a risk in the first place. The bigger the PITA (pain in the arse), the bigger the profit needed.

If we assume I’m in business to make a profit then we’ve got a problem when my labor costs go up. That dips into profits, right? Well, then I’ve only got a few options – assuming eating the loss is not one of them. Obviously I’d just close up shop before I kept eating a loss I saw no way to turn around.

But how might I try to turn it around? I can lay off some workers and require the others to be more efficient so that labor costs go back to where they were. I can cut some other expense – maybe close earlier, so I pay for less electricity and fewer labor hours. None of those are good for my employees or my customers, so maybe I need to find some new revenue instead. Let’s see… I either need to add a new revenue stream (find something to sell that I wasn’t already selling) or I need to raise prices on my existing product. Customers don’t usually like that; they might put up with it up to a point, but eventually they won’t, and business will suffer.
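To put some hypothetical numbers on that last option – a rough back-of-the-napkin sketch, where every figure is invented purely for illustration:

# Back-of-the-napkin: how much would prices have to rise to absorb a wage hike?
# Every figure below is a made-up assumption.

hourly_wage_old = 7.25       # current wage
hourly_wage_new = 15.00      # demanded wage
labor_hours_per_day = 80     # total crew hours per day
customers_per_day = 600
avg_ticket = 6.00            # average sale per customer

extra_labor_cost = (hourly_wage_new - hourly_wage_old) * labor_hours_per_day
required_bump = extra_labor_cost / customers_per_day

print(f"Extra labor cost per day: ${extra_labor_cost:.2f}")        # $620.00
print(f"Needed price bump per ticket: ${required_bump:.2f}")       # ~$1.03
print(f"That is a {required_bump / avg_ticket:.0%} across-the-board price hike.")  # ~17%

A 17% jump on every menu item just to stand still – that’s exactly the kind of number that sends customers next door.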

There is another solution though. Robots. Laugh if you want, but I’m serious. Utilizing robots, a business can keep prices where they are, and maybe even lower them! It can actually increase its hours of operation – in fact it would make sense to do so. The customers will be happy! But guess who won’t… that’s right, the employees who were so ecstatic about getting their raise. Guess who else won’t be happy: the tax collector, who doesn’t get the income tax money, the social security tax money, the unemployment tax money, etc. They’d be wise to stay out of business… not make it more difficult.
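Here’s the same napkin applied to the robot option (again, every number is a made-up assumption):

# Hypothetical payback period for replacing one full-time position with a machine.
robot_cost = 35000.0             # purchase + installation (assumed)
upkeep_per_year = 3000.0         # maintenance (assumed)
wage = 15.00                     # the newly demanded wage
hours_per_year = 2000            # one full-time position

worker_cost = wage * hours_per_year              # $30,000/yr (ignores payroll taxes, benefits)
savings_per_year = worker_cost - upkeep_per_year
print(f"Payback in about {robot_cost / savings_per_year:.1f} years")   # ~1.3 years

# At the old $7.25 wage the savings are only $11,500/yr, so payback takes
# about 3 years – quite possibly below the owner's attention threshold.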

Let me ask you this: Did you ever get a cable bill and go “WTF? Why did it go up another 10%? I’m not getting any more value so why are they charging me more?” This may result in “I’m switching to satellite!” or “I’m dropping my cable and I’ll just spend more time reading”.

Well, that’s how an employee who can be replaced with a robot eventually looks. The employer may not want to do it. It may actually be painful for the employer to make that change on a number of levels. But the labor is a significant cost that just got significantly raised, and it’s a cost that could be lowered. It’s a cost that in an ideal world would never have been allowed to get that high. The cost probably can’t be walked back down, either. There’d be too many hard feelings and too much animosity in the workplace among people who were too immature to see what problems they were causing. You probably can’t just lower the cost… you have to get rid of it and then replace it.

The sad thing is that when the employee was making a lesser amount it wasn’t even worth the employer’s time to think about a costly and cumbersome replacement that would have major effects on day-to-day operations. But when the employee demanded (and applied pressure) to receive a raise that was too large, it gave the business owner a kick in the rear. It woke the employer up and forced them to find a “better”, sustainable, affordable solution.

That’s where we’re heading, I think. There are robots that make Chinese noodles. There is increasingly good voice recognition software, and apps that can be used to order at a drive-through. There are robots that can fill drink cups. There are robots that can make fries and burgers. All of these things will only become better AND cheaper. All the while, many of the people who currently do these jobs are becoming more demanding of higher wages and benefits. It might not have been worth it for some of these operations to look into robotic employees while wages were in line with costs, but when a significant change in costs takes place, employers will shift their attention to finding cost savings.

These workers have done nothing but start the process of signing their own termination-of-employment letters. The bright side is that at least they will have helped create jobs for robotics techs and the slew of other jobs this growing industry will create! Here’s hoping that with the extra money these workers are demanding from their employers they are buying training (investing in themselves) to work in the field that takes over the jobs they caused to disappear.

Aug 28

Some thoughts on Software Patents

I have read a lot of what I consider to be fairly convincing arguments relating to software patents and whether the courts should “allow them” or “ban them”. Before I had the insight I’m going to share below I was definitely a fence hopper, but I think I have finally satisfied myself with an answer. It takes a wee bit of imagination and a willingness to be somewhat philosophical, but I think the thought process below will get you there.

If you’ve seen “The Matrix” you know that almost the entire movie involves real people living and acting in a virtual world. If you’ve seen “The Matrix Reloaded” you’ll remember a scene where a ship is returning to the real-world city of Zion and the crew is getting ready to enter the gates to the city. The city is very mechanical and computers are utilized to control everything, but do the people of Zion who operate the computers sit behind a keyboard? No they don’t. Instead they “plug in” in the real world, transferring their minds into the virtual world in which someone (presumably other humans) has programmed all of these controls. The cool thing about these controls is that not only can they be laid out like a keyboard, they can also be like a lever. It’s their own virtual world so they can build it however they want. They just need to be presented with an environment that they can manipulate to get “the job” done.

Now imagine current earth humans being able to let their minds live or work inside of a virtual world. Everything in that virtual world is actually software! Nothing is physical, though it could be designed to look and feel like it is. It may be designed such that your physical actions (i.e. grabbing a lever and pulling / pushing it) cause the software to behave in different ways, but it’s still not actually physical. How you interact with that software simply causes some state change in the outside world, but what you are interacting with IS software. Yes, the changes you introduce by manipulating the controls made available to you will cause some other software to cause changes in the outside world, which will in turn cause a change to the view of the world presented to those in the virtual world (and in the physical world). But it’s still software making it all work. The tools are software. The connections are software. The actor could even be software.
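A toy sketch of what I mean – a “lever” that is nothing but software, wired to a state change in the outside world (all the names here are invented for illustration):

# A toy "virtual control": the lever is pure software, but pulling it
# triggers a state change outside the virtual world.

class VirtualLever:
    def __init__(self, label, on_pull):
        self.label = label
        self.position = "up"
        self.on_pull = on_pull          # callback into the "outside world"

    def pull(self):
        self.position = "down" if self.position == "up" else "up"
        self.on_pull(self.label, self.position)

def open_zion_gate(label, position):
    # Stand-in for real-world actuation (a motor controller, a network call...)
    state = "opening" if position == "down" else "closing"
    print(f"[outside world] {label} is {position}; gate {state}")

gate_lever = VirtualLever("gate-lever-7", open_zion_gate)
gate_lever.pull()    # [outside world] gate-lever-7 is down; gate opening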

Once these types of systems are possible, and especially once they are commonplace, there could be a rush of what, in the physical world of today, we would call innovative people coming up with new widgets that can be used inside of this virtual world. These innovations will almost surely come at a cost that is paid by the programmer. In that case there would be a need for protection under some type of law in order to encourage people to create, test, and perfect them. Do we have a system that provides this sort of protection today? Yes, we do: the patent system. It would also be applicable to this sort of situation, considering the new types of “tools” that people would “physically” interact with inside the virtual world. All manner of things are possible in the real world today that we once knew weren’t possible (until someone innovated a way to do it), and the same will be true in the virtual world. Ways of doing things never even thought of will be, given the right motivation, not only thought of but implemented and improved upon. Different ways of looking at problems will cause unique solutions to become apparent. The solutions would be “obvious” once pointed out, but non-obvious before. Why would someone dedicate their time to looking for alternate solutions if the answer will net them no reward? History shows us that they won’t… not to the same degree, anyway.

Q: What about a hammer vs a “virtual hammer”? Would you really allow a patent on a virtual hammer that does the same thing in the software world that it does in the real world? It seems like everything would get re-patented with the only difference being that it is “in software”.

A: This question stems from one of the common errors untrained people make when judging patent validity. You can’t just look at the title, or the summary. Think about it. A software hammer wouldn’t be the same thing as a hardware hammer, would it? Software doesn’t have physical nails to drive. But maybe a software hammer can be made such that it easily automates the binding of two or more components using a single connective module. Something that used to take 10 virtual actions can be rolled up into the single action of hitting the objects with a hammer. The hammer basically just does all of those steps that “physically” had to be done before and eliminates them through some ingenious “piece of code”. Testing this piece of code and finding just the right tweaks for it came at a cost of thousands of lost operations (CPU cycles), mangled data, and even memory leaks that had to be dealt with before it was stable enough to be used in the virtual world. Why would someone give up those precious resources if it would not gain them some advantage? And now that it is done, it is an easily copyable solution, so what’s to stop another from copying it and using it without having put their own butt on the line? Copyright doesn’t do the trick, because code can be rewritten (hell, translate it to another language and you’ll have to modify it to do so). “But you’re still using the same algorithm; it’s obviously not the same code.” Yes, it is the same invention, and you shouldn’t be allowed to steal it, change the language, and call it new.
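To make the idea concrete – a toy sketch, with every name invented for the example – the “hammer” collapses a multi-step virtual procedure into a single gesture:

# Toy illustration: a "virtual hammer" that rolls a multi-step binding
# procedure into one action.

def align(a, b):  print(f"aligning {a} with {b}")
def clamp(a, b):  print(f"clamping {a} to {b}")
def fasten(a, b): print(f"fastening {a} and {b} with a connective module")
def verify(a, b): print(f"verifying the bond between {a} and {b}")

def hammer_strike(*components):
    """One gesture that performs every step of the old manual procedure."""
    first, rest = components[0], components[1:]
    for part in rest:
        align(first, part)
        clamp(first, part)
        fasten(first, part)
        verify(first, part)

hammer_strike("panel_a", "panel_b")   # several old steps, one new action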

It is my belief that as things become more virtualized, and as virtual reality becomes both more real and more immersive, we will see more need for patents on things in the virtual world. These things are no doubt software. But they are also no doubt in need of protection.

To be continued… or is this one step too far?

And if we know that software should be patentable in that eventual world, then software should be patentable now, for the simple reason that the simulation argument leads there.

May 30

Robots in War (2)

This is another article I read recently about robots in wars. It is more philosophical in nature, so keep that in mind as you read, and please consult the full text of the article for more. I really like the anecdotes about the Swiss army accidentally invading Liechtenstein and how a British Army platoon accidentally invaded a Spanish beach. Would these problems be eliminated with a robot army? See the article for an answer.

I will be trying to dig up the paper itself (Asaro’s “How Just Could a Robot War Be?”) and will also be doing some research into exactly what defines a “Just War”. I’ll post my findings at a later date.

I’ve included two long excerpts from the article below. Enjoy, and please consult the original article as it is quite well done.

In a fascinating paper entitled “How Just Could a Robot War Be?”, philosopher Peter Asaro of Rutgers University explores a number of robot war scenarios.

Asaro imagines a situation in which a nation is taken over by robots — a sort of revolution or civil war. Would a third party nation have a just cause for interceding to prevent this?

Asaro concludes that the use of autonomous technologies such as robot soldiers is neither “completely morally acceptable nor completely morally unacceptable” according to the just war theory formulated by Michael Walzer.

Just war theory defines the principles underlying most of the international laws regulating warfare, including the Geneva and Hague Conventions. Walzer’s classic book Just and Unjust Wars was a standard text at the West Point Military Academy for many years, although it was recently removed from the required reading list.

Asaro asserts that robotic technology, like all military force, could be just or unjust, depending on the situation.

h+: We’re using semi-autonomous robots now in Iraq and, of course, we’ve been using smart bombs for some time now. What is the tipping point – at what point does a war become a “robot war”?

PETER ASARO: There are many kinds of technologies being used already by the U.S. military, and I think it is quite easy to see the U.S. military as being a technological system. I wouldn’t call it robotic yet, though, as I think there is something important about having a “human-in-the-loop,” even if the military is trying to train soldiers to behave “robotically” and follow orders without question.

I think there is always a chance that a soldier will question a bad order, even if they are trained not to, and there is a lot of pressure on them to obey.

Ron Arkin is a roboticist at Georgia Tech who has designed an architecture for lethal robots that allows them to question their orders. He thinks we can actually make robots super-moral, and thereby reduce civilian casualties and war crimes.

I think Ron has made a good start on the kinds of technological design that might make this possible. The real technical and practical challenges are in properly identifying soldiers and civilians.

The criteria for doing this are obscure, and humans often make mistakes because information is ambiguous, incomplete, and uncertain. A robot and its computer might be able to do what is optimal in such a situation, but that might not be much better than what humans can do.

More importantly, human soldiers have the capacity to understand complex social situations, even if they often make mistakes because of a lack of cultural understanding.

I think we are a long way from achieving this with a computer, which at best will be using simplified models and making numerous potentially hazardous assumptions about the people they are deciding whether or not to kill.

Also, while it would surely be better if no soldiers were killed, having the technological ability to fight a war without casualties would certainly make it easier to wage unjust and imperial wars. This is not the only constraint, but it is probably the strongest one in domestic U.S. politics of the past 40 years or so.

By the way, I see robots primarily as a way to reduce the number of soldiers needed to fight a war. I don’t see them improving the capabilities of the military, but rather just automating them. The military hold an ideal vision of itself as operating like a well-oiled machine, so it seems that it can be rationalized and automated and roboticized. The reality is that the [human] military is a complex socio-technical system, and the social structure does a lot of hidden work in regulating the system and making it work well. Eliminating it altogether holds a lot of hidden dangers.

h+: You talk about the notion that robots could have moral agency – even superior moral agency – to human soldiers. What military would build such a soldier? Wouldn’t such a soldier be likely to start overruling the military commanders on policy decisions?

PA: I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.

I think we are now starting to see robots that are capable of taking morally significant actions, and we’re beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves… or for themselves.

They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant’s “categorical imperative” is the standard of rationality from which moral requirements are derived.]

We might be able to design robotic soldiers that could be more ethical than human soldiers.

Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.

Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron also designed architecture for robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.

It does seem like a hard sell to convince the military to build robots that might disobey orders. But they actually do tell soldiers to disobey illegal orders. The problem is that there are usually strong social and psychological pressures on soldiers to obey their commanders, so they usually carry them out anyway. The laws of war generally only hold commanders responsible for war crimes for this reason. For a killing in war to truly be just, then the one doing the killing must actually be on the just side in the war. In other words, the combatants do not have equal liability to be killed in war. For a robot to be really sure that any act of killing is just, it would first have to be sure that it was fighting for a just cause. It would have to question the nature of the war it is fighting in and it would need to understand international politics and so forth.

The robots would need to be more knowledgeable than most of the high school graduates who currently get recruited into the military. As long as the war is just and the orders are legal, then the robot would obey, otherwise it wouldn’t. I don’t think we are likely to see this capability in robots any time soon.

I do think that human soldiers are very concerned about morality and ethics, as they bear most of the moral burdens of war. They are worried about the public reaction as well, and want to be sure that there are systems in place to prevent tragic events that will outrage the public. It’s not impossible to try to control robot soldiers in this way. What we need is both the political will, and the technological design innovation to come together and shape a new set of international arms control agreements that ensures that all lethal robots will be required to have these types of ethical control systems.

Of course, there are also issues of proliferation, verification and enforcement for any such arms control strategy. There is also the problem of generating the political will for these controls. I think that robotic armies probably have the potential to change the geo-political balance of power in ways far more dramatic than nuclear arms.

We will have to come up with some very innovative strategies to contain and control them. I believe that it is very important that we are not naive about what the implications of developing robotic soldiers will mean for civil society.

May 30

Robots in War (1)

The idea of using robots in a war-type environment has always been there. At least it has always been there for those of us in the “Terminator generation”. Those in the Star Wars generation would have seen C-3PO or R2-D2 blasting away with lasers, and before that I’m sure there were other examples. So suffice it to say, robots in war is not a new concept. It is, however, now reality. Check out this article for some of the details. It’s a pretty cool read. Some excerpts are below.

This first excerpt covers an actual malfunction of such technology.

A few minutes before nine in the morning, and the young soldiers have no idea of the horror that is about to strike them. They are taking part in a massive military training exercise, involving 5,000 troops, and are about to showcase the latest in robotic weapons technology.

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

But while it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds.

As the display begins, the South African troops sense quickly that something is terribly wrong. The system appears to jam – but what happens next is truly chilling.

‘There was nowhere to hide,’ one witness stated in a report. ‘The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.’

One young female officer rushes forward to try to shut down the robotic gun – but it is too late.

‘She couldn’t, because the computer gremlin had taken over,’ the witness later said.

The rounds from the automated gun rip into her and she collapses to the ground. By the time the robot has emptied its magazine, nine soldiers lie dead (including the woman officer).

Another 14 are seriously injured. The report will later blame the bloodbath on a ‘software glitch’.

It sounds like a blood-spattered scene from the new blockbuster Terminator Salvation, in which a military computer takes over the world using an army of robot soldiers.

But this bloodbath actually happened. And concern is mounting that it may happen again and again, as a growing number of military robots flood the battlefield.

And this excerpt talks about the various sizes of these robots and the potential issues.

‘Just look at the numbers,’ he says. ‘We went into Iraq in 2003 with zero robots. Now we have 12,000 on the ground. They come in all shapes and sizes, from tiny machines to robots bigger than an 18-wheeler truck.

There are ones that fit on my little finger and ones with the wingspan of a football field.’

The U.S. military is the biggest investor in robot soldiers. Its robot programme, dubbed Future Combat Systems, is budgeted to spend $240 billion over the next 20 years.

But Singer is worried that in the rush to bring out ever more advanced systems, many lethal robots will be rolled out before they are ready.

It is a terrifying prospect. ‘Imagine a laptop armed with an M16 machine-gun,’ one expert said.

According to Noel Sharkey, a professor of robotics and artificial intelligence at Sheffield University, one of the biggest concerns is that this growing army of robots could stray out of communication range.

‘Just imagine a rogue robot roaming off the battlefield and into a nearby village,’ he says. ‘Without experts to shut it down, the results could be catastrophic.’

There are robots that can move through sand and water. There are robots that can hover, robots that can fly, humanoid robots. There are robots that can utilize a machine gun with the accuracy of a sniper shooting an apple from hundreds of meters. These robots can be armed with grenade launchers, machine guns, and rocket launchers. They’re not so smart, but they are good at what they are told to do.

Of course, as with any weapons technology there is fear it could fall into the wrong hands (assuming it started in the right hands to begin with). There is also fear of the robots making mistakes. For example, one might misidentify something as a threat. Can robots be made to understand the rules of engagement? These are questions that have to be dealt with and their consequences understood. But make no mistake, we – the world – have proceeded down this path. It is happening, and hopefully we can keep it under control or at least stay ahead of the curve.

I will leave you with this final thought from the article.

‘Body bags containing real soldiers coming home affect the government electorally,’ says Sharkey. ‘Once you start using robots, you remove this problem.’

But do we really want going to war to be as easy, and impersonal, as playing a computer game?

Apr 28

Manna – A story of Automated management by computers

A couple of days ago I read a story by Marshall Brain. I think Slashdot referred me to it (the first sign it was probably going to be a quality read) and after the first section I was hooked. It is either seven or eight chapters long and I read them all in one sitting. The story is about some software called “Manna” that was created to help manage employees doing easily micromanaged tasks. The software was deployed in a fast food restaurant with much success. Customers were happy, workers were happy, things were efficient. The rest of the story tells what happens when Manna’s usage is applied to additional domains as its capabilities are enhanced. I will include chapter one below in hopes of getting you hooked as well. Go check out the original source and all the additional chapters if you like it.

Here’s the first part:

Depending on how you want to think about it, it was funny or inevitable or symbolic that the robotic takeover did not start at MIT, NASA, Microsoft or Ford. It started at a Burger-G restaurant in Cary, NC on May 17, 2010. It seemed like such a simple thing at the time, but May 17 marked a pivotal moment in human history.

Burger-G was a fast food chain that had come out of nowhere starting with its first restaurant in 2001. The Burger-G chain had an attitude and a style that said “hip” and “fun” to a wide swath of the American middle class. The chain was able to grow with surprising speed based on its popularity and the public persona of the young founder, Joe Garcia. By 2010 the chain had 1,000 outlets in the U.S. and showed no signs of slowing down. If the trend continued, Burger-G would soon be one of the “Top 5” fast food restaurants in the U.S.

The “robot” installed at this first Burger-G restaurant looked nothing like the robots of popular culture. It was not hominid like C-3PO or futuristic like R2-D2 or industrial like an assembly line robot. Instead it was simply a PC sitting in the back corner of the restaurant running a piece of software. The software was called “Manna”, version 1.0*.

Manna’s job was to manage the store, and it did this in a most interesting way. Think about a normal fast food restaurant circa 2000. There was a group of employees who worked at the store, typically 50 people in a normal restaurant who rotated in and out on a weekly schedule. The people did everything from making the burgers to taking the orders to cleaning the tables and taking out the trash. All of these employees reported to the store manager and a couple of assistant managers. The managers hired the employees, scheduled them and told them what to do each day. This was a completely normal arrangement. In 2000, there were millions of businesses that operated in this way.

Circa 2000, the fast food industry had a problem, and Burger-G was no different. The problem was the quality of the fast food experience. Some restaurants were run perfectly. They had courteous and thoughtful crew members, clean restrooms, great customer service and high accuracy on the orders. Other restaurants were chaotic and uncomfortable to customers. Since one bad experience could turn a customer off to an entire chain of restaurants, these poorly-managed stores were the Achilles heel of any chain.

To solve the problem, Burger-G contracted with a software consultant and commissioned a piece of software. The goal of the software was to replace the managers and tell the employees what to do in a more controllable way. Manna version 1.0 was born.

Manna was connected to the cash registers, so it knew how many people were flowing through the restaurant. The software could therefore predict with uncanny accuracy when the trash cans would fill up, the toilets would get dirty and the tables needed wiping down. The software was also attached to the time clock, so it knew who was working in the restaurant. Manna also had “help buttons” throughout the restaurant. Small signs on the buttons told customers to push them if they needed help or saw a problem. There was a button in the restroom that a customer could press if the restroom had a problem. There was a button on each trashcan. There was a button near each cash register, one in the kiddie area and so on. These buttons let customers give Manna a heads up when something went wrong.

At any given moment Manna had a list of things that it needed to do. There were orders coming in from the cash registers, so Manna directed employees to prepare those meals. There were also toilets to be scrubbed on a regular basis, floors to mop, tables to wipe, sidewalks to sweep, buns to defrost, inventory to rotate, windows to wash and so on. Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.

Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

The software would speak to the employees individually and tell each one exactly what to do. For example, “Bob, we need to load more patties. Please walk toward the freezer.”

Or, “Jane, when you are through with this customer, please close your register. Then we will clean the women’s restroom.”

And so on. The employees were told exactly what to do, and they did it quite happily. It was a major relief actually, because the software told them precisely what to do step by step.

For example, when Jane entered the restroom, Manna used a simple position tracking system built into her headset to know that she had arrived. Manna then told her the first step.

Manna: “Place the ‘wet floor’ warning cone outside the door please.”

When Jane completed the task, she would speak the word “OK” into her headset and Manna moved to the next step in the restroom cleaning procedure.

Manna: “Please block the door open with the door stop.”

Jane: “OK.”

Manna: “Please retrieve the bucket and mop from the supply closet.”

Jane: “OK.”

And so on.

Once the restroom was clean, Manna would direct Jane to put everything away. Manna would make sure that she carefully washed her hands. Then Manna would immediately start Jane working on a new task. Meanwhile, Manna might send Lisa to the restroom to inspect it and make sure that Jane had done a thorough job. Manna would ask Lisa to check the toilets, the floor, the sink and the mirrors. If Jane missed anything, Lisa would report it.

I grew up in Cary, NC. That was a long time ago, but when I was a kid I lived right in the middle of Cary with my parents. My father was a pilot for a big airline. My mother was a stay-at-home mom and I had a younger sister. We lived in a typical four bedroom suburban home in a nice neighborhood with a swimming pool in the backyard. I was a 15 year-old teenager working at the Burger-G on May 17 when the first Manna system came online.

I can remember putting on the headset for the first time and the computer talking to me and telling me what to do. It was creepy at first, but that feeling really only lasted a day or so. Then you were used to it, and the job really did get easier. Manna never pushed you around, never yelled at you. The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step. Each step was easy. You could go through the whole day on autopilot, and Manna made sure that you were constantly doing something. At the end of the shift Manna always said the same thing. “You are done for today. Thank you for your help.” Then you took off your headset and put it back on the rack to recharge. The first few minutes off the headset were always disorienting — there had been this voice in your head telling you exactly what to do in minute detail for six or eight hours. You had to turn your brain back on to get out of the restaurant.

To me, Manna was OK. The job at Burger-G was mindless, and Manna made it easy by telling you exactly what to do. You could even get Manna to play music through your headphones, in the background. Manna had a set of “stations” that you could choose from. That was a bonus. And Manna kept you busy the entire day. Every single minute, you had something that Manna was telling you to do. If you simply turned off your brain and went with the flow of Manna, the day went by very fast.

My father, on the other hand, did not like Manna at all from the very first day he saw me wearing the headset in the restaurant. He and Mom had come in for lunch and to say hi. I knew they were coming, so I had timed my break so I could sit down with them for a few minutes. When I sat down, my father noticed the headset.

“So”, he said, “they have you working the drive-thru I see. Is that a step up or a step down?”

“It’s not the drive-thru,” I replied, “it’s a new system they’ve installed called Manna. It manages the store.”

“How so?”

“It tells me what to do through the headset.”

“Who, the manager?”

“No, it’s a computer.”

He looked at me for a long time, “A computer is telling you what to do on the job? What does the manager do?”

“The computer is the manager. Manna, manager, get it?”

“You mean that a computer is telling you what to do all day?”, he asked.

“Yeah.”

“Like what?”

I gave him an example, “Before you got here, I was taking out the trash. Manna told me how to do it.”

“What did it say?”

“It tells you exactly what to do. Like, It told me to get four new bags from the rack. When I did that it told me to go to trash can #1. Once I got there it told me to open the cabinet and pull out the trash can. Once I did that it told me to check the floor for any debris. Then it told me to tie up the bag and put it to the side, on the left. Then it told me to put a new bag in the can. Then it told me to attach the bag to the rim. Then it told me to put the can back in and close the cabinet. Then it told me to wipe down the cabinet and make sure it’s spotless. Then it told me to push the help button on the can to make sure it is working. Then it told me to move to trash can #2. Like that.”

He looked at me for a long time again before he said, “Good Lord, you are nothing but a piece of a robot. What is it saying to you now?”

“It just told me I have three minutes left on my break. And it told me to smile and say hello to the guests. How’s this? Hi!” And I gave him a big toothy grin.

“Yesterday the people controlled the computers. Now the computers control the people. You are the eyes and hands for this robot. And all so that Joe Garcia can make $20 million per year. Do you know what will happen if this spreads?”

“No, I don’t. And I think Mr. G makes more than $20 million a year. But right now I’ve got two minutes left, and Manna is telling me that I need to move back to station 3 to get ready for the next run. See ya.” I waved at Mom. Dad just stared at me.

The tests in our Burger-G store were surprisingly successful. There were Burger-G corporate guys in the restaurant watching us, fixing bugs in the software, making sure Manna was covering all the bases, and they were pleased. It took about 3 months to work all the kinks out, and as they did the Manna software totally changed the restaurant. Worker performance nearly doubled. So did customer satisfaction. So did the consistency of the customer’s experience. Trash cans never overfilled. Bathrooms were remarkably clean. Employees always washed their hands when they needed to. Food was ready faster. The meals we handed out were nearly 100 percent accurate because Manna made us check to make sure every item in the bag was exactly what the customer ordered. The store never ran out of supplies — there were always plenty of napkins in the dispenser and the ketchup container was always full. There were enough employees in the store for the busy times, because Manna could accurately track trends and staff appropriately.

In addition, Burger-G saved a ton of money. In 2010, Burger-G had just over 1,000 stores in the United States. Manna worked so well that Burger-G deployed it nationwide in 2011. By 2012 Burger-G had cut more than 3,000 of its higher-paid store employees — mostly assistant managers and managers. That one change saved the company nearly $100 million per year, and all that money came straight to the bottom line for the restaurant chain. Shareholders were ecstatic. Mr. G gave himself another big raise to celebrate. In addition, Manna had optimized store staffing and had gotten a significant productivity boost out of the employees in the store. That saved another $150 million. $250 million made a huge difference in the fast food industry.

So, the first real wave of robots did not replace all the factory workers as everyone imagined. The robots replaced middle management and significantly improved the performance of minimum wage employees. All of the fast food chains watched the Burger-G experiment with Manna closely, and by 2012 they started installing Manna systems as well. By 2014 or so, nearly every business in America that had a significant pool of minimum-wage employees was installing Manna software or something similar. They had to do it in order to compete.

In other words, Manna spread through the American corporate landscape like wildfire. And my dad was right. It was when all of these new Manna systems began talking to each other that things started to get uncomfortable.

Next Chapter (at MarshallBrain.com)

Jan 24

Are you living in a computer simulation?

This is a portion of a much longer document written not by me but by Nick Bostrom. Please visit his site using the links he provided in the original work.


ARE YOU LIVING IN A COMPUTER SIMULATION?

BY NICK BOSTROM

Department of Philosophy, Oxford University

Homepage: http://www.nickbostrom.com

[First version: May 2001; final version: July 2002]

Published in Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255.

[This document is located at http://www.simulation-argument.com] [pdf-version] [mirrored by PoolOfThought pdf-version]

ABSTRACT

This paper argues that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.
It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

I. INTRODUCTION

Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.

Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.
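(A note from me, not Bostrom: the “simple probability theory” he mentions boils down to a single fraction. This is my rough transcription of the paper’s core formula, in his notation:)

f_{\text{sim}} = \frac{f_p \,\bar{N}\, \bar{H}}{f_p\, \bar{N}\, \bar{H} + \bar{H}} = \frac{f_p \bar{N}}{f_p \bar{N} + 1}

Here f_p is the fraction of human-level civilizations that survive to reach a posthuman stage, \bar{N} is the average number of ancestor-simulations such a civilization runs, and \bar{H} is the average number of individuals who lived in a civilization before it reached that stage. If f_p \bar{N} is huge, then f_sim is close to 1 and nearly all minds like ours are simulated (proposition 3); for f_sim to be small, either f_p or \bar{N} must be close to zero (propositions 1 and 2).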