Dec 04

IF (by Rudyard Kipling)

True in sports, business, and life in general:

If you can keep your head when all about you
Are losing theirs and blaming it on you,
If you can trust yourself when all men doubt you,
But make allowance for their doubting too;
If you can wait and not be tired by waiting,
Or being lied about, don’t deal in lies,
Or being hated, don’t give way to hating,
And yet don’t look too good, nor talk too wise:

If you can dream – and not make dreams your master;
If you can think – and not make thoughts your aim;
If you can meet with Triumph and Disaster
And treat those two impostors just the same;
If you can bear to hear the truth you’ve spoken
Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
And stoop and build ’em up with worn-out tools:

If you can make one heap of all your winnings
And risk it on one turn of pitch-and-toss,
And lose, and start again at your beginnings
And never breathe a word about your loss;
If you can force your heart and nerve and sinew
To serve your turn long after they are gone,
And so hold on when there is nothing in you
Except the Will which says to them: ‘Hold on!’

If you can talk with crowds and keep your virtue,
Or walk with Kings – nor lose the common touch,
If neither foes nor loving friends can hurt you,
If all men count with you, but none too much;
If you can fill the unforgiving minute
With sixty seconds’ worth of distance run,
Yours is the Earth and everything that’s in it,
And – which is more – you’ll be a Man, my son!

Aug 28

Some thoughts on Software Patents

I have read a lot of what I consider to be fairly convincing arguments about software patents and whether the courts should “allow them” or “ban them”. Before I had the insight I’m going to share below I was definitely a fence-sitter, but I think I have finally satisfied myself with an answer. It takes a wee bit of imagination and a willingness to be somewhat philosophical, but I think the thought process below will get you there.

If you’ve seen “The Matrix” you know that almost the entire movie involves real people living and acting in a virtual world. If you’ve seen “The Matrix Reloaded” you’ll remember a scene where a ship is returning to the real-world city of Zion and preparing to enter the gates of the city. The city is very mechanical and computers are utilized to control everything, but do the people of Zion who operate the computers sit behind a keyboard? No, they don’t. Instead they “plug in”, transferring their minds from the real world into a virtual world in which someone (presumably other humans) has programmed all of these controls. The cool thing about these controls is that they don’t have to be laid out like a keyboard; one could just as easily be a lever. It’s their own virtual world, so they can build it however they want. They just need to be presented with an environment that they can manipulate to get “the job” done.

Now imagine present-day humans being able to let their minds live or work inside a virtual world. Everything in that virtual world is actually software! Nothing is physical, though it could be designed to look and feel like it is. It may be designed such that your physical actions (i.e. grabbing a lever and pulling or pushing it) cause the software to behave in different ways, but it’s still not actually physical. How you interact with that software simply causes some state change in the outside world, but what you are interacting with IS software. Yes, the changes you introduce by manipulating the controls made available to you will cause some other software to cause changes in the outside world, which will in turn change the view of the world presented to those in the virtual world (and in the physical world). But it’s still software making it all work. The tools are software. The connections are software. The actor could even be software.
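To make that concrete, here’s a tiny sketch of what a “virtual lever” amounts to in code. This is just my illustration, with made-up names; the point is that the lever is pure software state plus a hook into the outside world:

```python
# A minimal sketch (all names are hypothetical): the lever is nothing but
# software state, and pulling it fires a callback that changes something
# in the outside world.

class VirtualLever:
    def __init__(self, on_pull):
        self.position = "up"    # purely virtual state
        self.on_pull = on_pull  # the hook into the outside world

    def pull(self):
        self.position = "down"
        self.on_pull()          # the real-world side effect


def open_gate():
    print("signal sent: gate motors engaged")  # stand-in for real hardware


gate_lever = VirtualLever(on_pull=open_gate)
gate_lever.pull()  # feels physical to the operator, but it is all software
```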

Once these types of systems are possible, and especially once they are commonplace, there could be a rush of what we would today call innovative people coming up with new widgets to be used inside this virtual world. These innovations will almost surely come at a cost borne by the programmer. In that case there would be a need for protection under some type of law in order to encourage people to create, test, and perfect them. Do we have a system that provides this sort of protection today? Yes, we do: the patent system. It would also be applicable to this sort of situation, considering the new types of “tools” that people would “physically” interact with inside the virtual world. All manner of things are possible in the real world today that we once “knew” were impossible (until someone innovated a way to do them), and the same will be true in the virtual world. Ways of doing things never even thought of will be, given the right motivation, not only thought of but implemented and improved upon. Different ways of looking at problems will cause unique solutions to become apparent. The solutions would be “obvious” once pointed out, but would be non-obvious prior to that. Why would someone dedicate their time to looking for alternate solutions if the answer will net them no reward? History shows us that they won’t… not to the same degree, anyway.

Q: What about a hammer vs. a “virtual hammer”? Would you really allow a patent on a virtual hammer that does the same thing in the software world that it does in the real world? It seems like everything would get re-patented, with the only difference being that it is “in software”.

A: This question stems from one of the common errors untrained people make when judging patent validity: you can’t just look at the title or the summary. Think about it. A software hammer wouldn’t be the same thing as a hardware hammer, would it? Software doesn’t have physical nails to drive. But maybe a software hammer can be made such that it easily automates the binding of two or more components using a single connective module. Something that used to take 10 virtual actions can be rolled up into the single action of hitting the objects with a hammer. The hammer basically just performs all of those steps that “physically” had to be done before and eliminates them through some ingenious piece of code. Testing this piece of code and finding just the right tweaks for it came at a cost of thousands of lost operations (CPU cycles), mangled data, and even memory leaks that had to be dealt with before it became stable enough to be used in the virtual world. Why would someone give up these precious resources if it would not gain them some advantage? And now that it is done, it is an easily copyable solution, so what’s to stop another from copying it and using it without having put their own butt on the line? Copyright doesn’t do the trick, as code can be rewritten (hell, translate it to another language and you’ll have to modify it to do so). “You’re still using the same algorithm, but it’s obviously not the same code.” Yes it is, and you shouldn’t be allowed to steal the code, change the language, and call it new.
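To illustrate what I mean by “rolling up” steps, here’s a toy sketch. The step names and the hammer itself are purely hypothetical, not anyone’s actual patent claim; the point is that one composite action replaces a sequence of manual ones:

```python
# Hypothetical "software hammer": several separate virtual actions rolled
# up into one composite operation.

def align(a, b):
    print(f"align {a} with {b}")

def clamp(a, b):
    print(f"clamp {a} to {b}")

def fasten(a, b):
    print(f"fasten {a} to {b}")

def hammer(a, b):
    """One 'swing' performs the whole binding sequence."""
    align(a, b)
    clamp(a, b)
    fasten(a, b)  # ...plus however many more steps the old workflow needed

hammer("beam", "joint")  # one action instead of ten
```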

It is my belief that as things become more virtualized, and as virtual reality becomes both more real and more immersive, we will see more need for patents on things in the virtual world. These things are no doubt software. But they are also no doubt in need of protection.

To be continued… or is this one step too far?

And if we know that software should be patentable in that eventual world, then software should be patentable now, for the simple reason that the simulation argument suggests we may already be living in such a world.

May 14

To lock down or not lock down

A while back I did a post about making sure that you lock down your wifi so that people do not do nefarious things on your connection and get you into trouble. Well, apparently that was not the best “legal” suggestion. If your wifi is open and someone does something wrong then, well, it could have been anyone that was using your IP. But if your wifi is closed and something wrong is done (originating at your IP address), then you are viewed as that much more likely to be the target of an investigation. After all, who else could have been using your IP? Your wifi was closed!

According to the Electronic Frontier Foundation (EFF), keeping your router OPEN may offer more legal protection than having it closed.

If you run an open wireless network, you may be able to receive significant legal protection from Section 230 of the CDA (against civil and state criminal liability for what others publish through the service) and Section 512 of the DMCA (against copyright claims based on what others use the service for). While these protections are not complete, EFF regularly engages in impact litigation to help ensure that these laws offer as strong protection to network operators as possible.

The fact is that wireless router security is often viewed as something you just set up and leave alone, and it works to keep the bad guys off your line. However, wireless security is relatively weak and much of it can be broken. It won’t be long before the bad guys have access to your locked router and start making trouble. When they do, it will look like YOU are the one making trouble. On the one hand, you hate to give the bad guys a free ride; on the other hand, you would hate to get punished for what they do if they stole your ride and did something inappropriate with it.

I continue to go back and forth on this one. I have gone months with my router open, and then some time with it closed. I usually have to close it due to too much bandwidth being used. My Netflix will start lagging (don’t mess with my Sarah Connor Chronicles!) or whatever, and I know that someone is getting a little happy with my bandwidth.

It makes me nervous both ways, to be honest. I have several houses full of teenagers around me, all within wireless reach. Do I want them going to sites they shouldn’t or performing illegal activities over my router? Nope. Do I want them using up all my bandwidth? Nope. Do I want to be nice and allow for free access? Yes. Do I want to have someone crack my WEP, gain access to my router, and then do unruly things so that it appears it was me? No way! So what do I do? What would you do?

My plan is, in general, to go open wireless. Sometimes I’ll close off the open access if I have bandwidth-hogging issues, and then I’ll open it back up once I think they’ve gotten the point. If you come around and don’t find an open network currently available, don’t be discouraged. I have likely gone into non-sharing mode for a short time in order to get the bandwidth hogs to move along, and will reopen for public use soon enough. Really, this isn’t much of a change. I like to provide a needed service, and I understand the need for open wireless points. Now that I see there are even legal “goodies” to go along with having it open, I feel even better about the way I’ve operated historically and will continue to lean towards open, available wireless.

May 04

Do it yourself… at least once

Not too long ago my wife and I purchased a new house. Well, actually, there’s nothing new about this house. It’s about 40 years old and full of “issues”. Some of them are purely a product of neglect (it was vacant for almost 2 years prior to our purchasing it), some of them are due to the house’s age, and some of them are simply “things I wish were different”. I’ll be learning a lot of new skills on this one…

I have a tendency to avoid paying someone else to do something until I have done the same thing myself. For example, when I was 19 or so I decided I wasn’t going to take my car in to get the oil changed; I’d change it myself. It did not end well… I drained the wrong fluid. I dumped the manual transmission fluid when I pulled the plug (which explained why my “oil” was purple), and before I knew it my car was in the shop anyway… for a more expensive fix. I did go ahead and finish the oil change myself first, though.

So why did I decide to do it myself? Is it because I like working on cars? Nope, not really (though I do like to understand how they work, just in case). It’s because I wanted to know (1) whether I could do it myself and save some money, and (2) if I could do it myself for less money, whether the savings would be enough to make it worth my time. If the answer to (2) is “no”, then I simply won’t do it anymore. I’ll pay someone else, but I want to know what I’m paying them to do.

I felled a tree with an axe last weekend. It was very satisfying. The tree was approximately 40 feet tall with a trunk diameter of around 10 inches. It was a lot of work. And despite the satisfying feeling of watching the tree fall to the ground, brought down by my sweat and determination, I now know WHY I would pay someone to cut down any tree bigger than that one. It is simply not worth my time / pain / equipment / etc. to do it myself. Previously, when getting estimates, I might have thought “200 dollars for that tree… is this guy trying to rip me off?” or “I have to get 5 estimates just to be sure everyone is in the right ballpark”. But now, having done it myself, I know what I would quote myself, and I know that it likely takes me two or three times as long as a “pro”, so I can adjust accordingly. I also know that I can spend the time I would have spent on the tree working on a new computer program… or doing some house maintenance I’m actually good at… which is likely a far better use of my time. Heck, I might be able to earn that $200 working on computer stuff in the amount of time it would have taken me to cut down the tree and haul it out to the curb. In that case, I can rest well knowing that both the contractor and I win. I can write that check with confidence and without regrets or hesitation.
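For what it’s worth, here’s the back-of-the-envelope math I run in my head, sketched out with made-up numbers (your quote, hours, and hourly worth will differ):

```python
# Back-of-the-envelope opportunity-cost check: is DIY worth it?
# All numbers are made up for illustration.

quote = 200.0        # contractor's estimate ($)
pro_hours = 4.0      # how long a pro would take
my_multiplier = 3.0  # I'm roughly 3x slower than a pro
my_rate = 50.0       # what an hour of my time is otherwise worth ($/hr)

my_hours = pro_hours * my_multiplier
cost_of_my_time = my_hours * my_rate

print(f"DIY costs me ~${cost_of_my_time:.0f} of time vs. a ${quote:.0f} quote")
# If cost_of_my_time > quote, write the check with confidence.
```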

When possible, I suggest doing the things you would pay someone else to do at least once. Maybe you’ll find you like it and are good at it. Maybe you’ll just reaffirm your decision to let someone else do it. Either way, odds are you’ll learn something useful.

(I pay someone to change my oil. I can get it changed for about $13 at the right time of day, and it only takes about 10 minutes. I can barely buy all the supplies for that price, and it would likely take me an hour. I know how, just in case, but for now it’s worth the time saved and frustration avoided to write the check and rest well knowing that it truly is the right decision as opposed to the “easy” decision.)

Jun 05

Some Wishful thinking explained

According to this article (http://www.physorg.com/news158928941.html), quantum theory may explain some cases of “wishful thinking”.

(PhysOrg.com) — Humans don’t always make the most rational decisions. As studies have shown, even when logic and reasoning point in one direction, sometimes we choose the opposite route, motivated by personal bias or simply “wishful thinking.” This paradoxical human behavior has resisted explanation by classical decision theory for over a decade. But now, scientists have shown that a quantum probability model can provide a simple explanation for human decision-making – and may eventually help explain the success of human cognition overall.
Consider the following scenario. You are playing a game in which you are given the following:

Only A or B can happen.
You can respond to any event with either X or Y.
If you KNOW A happens, the response with the highest probability of gain is X.
If you KNOW B happens, the response with the highest probability of gain is X.

Why in the world would you ever not do X? It seems you always should, right? Well, what if I told you that X is really kind of a shady thing to do? It’s still the best thing for you to do to come out on top, but it’s kind of “wrong”.
Here’s another scenario…
“If you were asked to gamble in a game in which you had a 50/50 chance to win $200 or lose $100, would you play? In one study, participants were told that they had just played this game, and then were asked to choose whether to try the same gamble again. One-third of the participants were told that they had won the first game, one-third were told they had lost the first game, and the remaining one-third did not know the outcome of their first game. Most of the participants in the first two scenarios chose to play again (69% and 59%, respectively), while most of the participants in the third scenario chose not to (only 36% played again). These results violate the “sure thing principle,” which says that if you prefer choice A in two complementary known states (e.g., known winning and known losing), then you should also prefer choice A when the state is unknown. So why do people choose differently when confronted with an unknown state?”
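To see why playing is supposed to be the obvious choice, the expected value takes two lines to compute (the play rates are the ones quoted above; the rest is just arithmetic):

```python
# Expected value of the gamble described above.
p_win, win, loss = 0.5, 200, -100
print(p_win * win + (1 - p_win) * loss)  # 50.0 -> favorable on average

# Reported play rates from the quoted study:
played_again = {"told they won": 0.69, "told they lost": 0.59, "unknown": 0.36}
# Sure thing principle: playing after a known win AND after a known loss
# implies you should also play when the outcome is unknown. The 36% row
# is the violation the article is talking about.
```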
A different type of problem… the Prisoner’s Dilemma.
“In their study, the scientists compared two models, one based on Markovian classical probability theory and the other based on quantum probability theory. They modeled a game based on the Prisoner’s Dilemma, which is similar to the gambling game. Here, participants were asked if they wanted to cooperate with or defect from an imaginary partner. Overall, each partner would receive larger pay-outs if they defected, making defecting the rational choice. However, if both partners cooperated, they would each receive a higher pay-out than if both defected. Similar to the results from the gambling games, studies have shown that participants who were told that their partner had defected or cooperated on the first round usually chose to defect on the second round (84% and 66%, respectively). But participants who did not know their partner’s previous decision were more likely to cooperate than the others (only 55% defected). It seems as if these individuals were trying to give their partners the benefit of the doubt, at the expense of making the rational choice.”
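If you haven’t seen the Prisoner’s Dilemma laid out before, here’s a minimal sketch. The payoff numbers are textbook example values of mine, not figures from the study; they just satisfy the structure described above:

```python
# payoffs[(my_move, their_move)] = my payoff. Example values only.
payoffs = {
    ("defect", "cooperate"): 5,     # temptation payoff
    ("cooperate", "cooperate"): 3,  # mutual cooperation beats...
    ("defect", "defect"): 1,        # ...mutual defection,
    ("cooperate", "defect"): 0,     # but defecting always pays more for me
}

for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: payoffs[(mine, theirs)])
    print(f"if they {theirs}, my best reply is to {best}")  # always "defect"
```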
What does it mean?

Personally, I think it should be called “benefit of the doubt thinking” rather than wishful thinking. The article describes how, when the other side’s choice is unknown, people view the other side as a mirror of themselves. Read the article for a better, deeper explanation. It’s worth the read if you are interested in that sort of thing.

May 30

Robots in War (2)

This is another article I read recently about robots in war. It is more philosophical in nature, so keep that in mind as you read, and please consult the full text of the article for more. I really like the anecdotes about the Swiss Army accidentally invading Liechtenstein and a British Army platoon accidentally invading a Spanish beach. Would these problems be eliminated with a robot army? See the article for an answer.

I will be trying to dig up the paper itself (Asaro’s “How Just Could a Robot War Be?”) and will also be doing some research into exactly what defines a “Just War”. I’ll post my findings at a later date.

I’ve included two long excerpts from the article below. Enjoy, and please consult the original article as it is quite well done.

In a fascinating paper entitled “How Just Could a Robot War Be?”, philosopher Peter Asaro of Rutgers University explores a number of robot war scenarios.

Asaro imagines a situation in which a nation is taken over by robots — a sort of revolution or civil war. Would a third party nation have a just cause for interceding to prevent this?

Asaro concludes that the use of autonomous technologies such as robot soldiers is neither “completely morally acceptable nor completely morally unacceptable” according to the just war theory formulated by Michael Walzer.

Just war theory defines the principles underlying most of the international laws regulating warfare, including the Geneva and Hague Conventions. Walzer’s classic book Just and Unjust Wars was a standard text at the West Point Military Academy for many years, although it was recently removed from the required reading list.

Asaro asserts that robotic technology, like all military force, could be just or unjust, depending on the situation.

h+: We’re using semi-autonomous robots now in Iraq and, of course, we’ve been using smart bombs for some time now. What is the tipping point – at what point does a war become a “robot war”?

PETER ASARO: There are many kinds of technologies being used already by the U.S. military, and I think it is quite easy to see the U.S. military as being a technological system. I wouldn’t call it robotic yet, though, as I think there is something important about having a “human-in-the-loop,” even if the military is trying to train soldiers to behave “robotically” and follow orders without question.

I think there is always a chance that a soldier will question a bad order, even if they are trained not to, and there is a lot of pressure on them to obey.

Ron Arkin is a roboticist at Georgia Tech who has designed an architecture for lethal robots that allows them to question their orders. He thinks we can actually make robots super-moral, and thereby reduce civilian casualties and war crimes.

I think Ron has made a good start on the kinds of technological design that might make this possible. The real technical and practical challenges are in properly identifying soldiers and civilians.

The criteria for doing this are obscure, and humans often make mistakes because information is ambiguous, incomplete, and uncertain. A robot and its computer might be able to do what is optimal in such a situation, but that might not be much better than what humans can do.

More importantly, human soldiers have the capacity to understand complex social situations, even if they often make mistakes because of a lack of cultural understanding.

I think we are a long way from achieving this with a computer, which at best will be using simplified models and making numerous potentially hazardous assumptions about the people they are deciding whether or not to kill.

Also, while it would surely be better if no soldiers were killed, having the technological ability to fight a war without casualties would certainly make it easier to wage unjust and imperial wars. This is not the only constraint, but it is probably the strongest one in domestic U.S. politics of the past 40 years or so.

By the way, I see robots primarily as a way to reduce the number of soldiers needed to fight a war. I don’t see them improving the capabilities of the military, but rather just automating them. The military holds an ideal vision of itself as operating like a well-oiled machine, so it seems that it can be rationalized and automated and roboticized. The reality is that the [human] military is a complex socio-technical system, and the social structure does a lot of hidden work in regulating the system and making it work well. Eliminating it altogether holds a lot of hidden dangers.

h+: You talk about the notion that robots could have moral agency – even superior moral agency – compared to human soldiers. What military would build such a soldier? Wouldn’t such a soldier be likely to start overruling the military commanders on policy decisions?

PA: I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.

I think we are now starting to see robots that are capable of taking morally significant actions, and we’re beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves… or for themselves.

They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant’s “categorical imperative” is the standard of rationality from which moral requirements are derived.]

We might be able to design robotic soldiers that could be more ethical than human soldiers.

Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.

Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron also designed an architecture for robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.

It does seem like a hard sell to convince the military to build robots that might disobey orders. But they actually do tell soldiers to disobey illegal orders. The problem is that there are usually strong social and psychological pressures on soldiers to obey their commanders, so they usually carry them out anyway. The laws of war generally only hold commanders responsible for war crimes for this reason. For a killing in war to truly be just, the one doing the killing must actually be on the just side in the war. In other words, the combatants do not have equal liability to be killed in war. For a robot to be really sure that any act of killing is just, it would first have to be sure that it was fighting for a just cause. It would have to question the nature of the war it is fighting in and it would need to understand international politics and so forth.

The robots would need to be more knowledgeable than most of the high school graduates who currently get recruited into the military. As long as the war is just and the orders are legal, then the robot would obey, otherwise it wouldn’t. I don’t think we are likely to see this capability in robots any time soon.

I do think that human soldiers are very concerned about morality and ethics, as they bear most of the moral burdens of war. They are worried about the public reaction as well, and want to be sure that there are systems in place to prevent tragic events that will outrage the public. It’s not impossible to try to control robot soldiers in this way. What we need is both the political will, and the technological design innovation to come together and shape a new set of international arms control agreements that ensures that all lethal robots will be required to have these types of ethical control systems.

Of course, there are also issues of proliferation, verification and enforcement for any such arms control strategy. There is also the problem of generating the political will for these controls. I think that robotic armies probably have the potential to change the geo-political balance of power in ways far more dramatic than nuclear arms.

We will have to come up with some very innovative strategies to contain and control them. I believe that it is very important that we are not naive about what the implications of developing robotic soldiers will mean for civil society.
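To make the “override” idea concrete for myself, I sketched the world’s smallest version of such a gate below. To be clear, this is my own toy illustration, not Arkin’s actual architecture, and as Asaro points out, the hard part is everything the two yes/no checks are hand-waving over:

```python
# Toy sketch of an "ethical governor" style gate: refuse any order that
# conflicts with humanitarian law or the rules of engagement (ROE).

def ethical_governor(order, violates_law, violates_roe):
    if violates_law(order) or violates_roe(order):
        return "refuse and report"
    return "proceed"

order = {"target": "unidentified vehicle"}
print(ethical_governor(
    order,
    violates_law=lambda o: o["target"] == "unidentified vehicle",
    violates_roe=lambda o: False,
))  # -> "refuse and report"
# The real challenge is building reliable violates_law / violates_roe
# classifiers from ambiguous, incomplete battlefield data.
```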

May 30

Robots in War (1)

The idea of using robots in a war-type environment has always been there. At least it has always been there for those in the “Terminator generation”. Those in the Star Wars generation would have seen C-3PO or R2-D2 blasting away with lasers, and I’m sure there were earlier examples still. So suffice it to say, robots in war is not a new concept. It is, however, now reality. Check out this article for some of the details. It’s a pretty cool read. Some excerpts are below.

This first excerpt covers an actual malfunction of such technology.

A few minutes before nine in the morning, and the young soldiers have no idea of the horror that is about to strike them. They are taking part in a massive military training exercise, involving 5,000 troops, and are about to showcase the latest in robotic weapons technology.

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

But while it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds.

As the display begins, the South African troops sense quickly that something is terribly wrong. The system appears to jam – but what happens next is truly chilling.

‘There was nowhere to hide,’ one witness stated in a report. ‘The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.’

One young female officer rushes forward to try to shut down the robotic gun – but it is too late.

‘She couldn’t, because the computer gremlin had taken over,’ the witness later said.

The rounds from the automated gun rip into her and she collapses to the ground. By the time the robot has emptied its magazine, nine soldiers lie dead (including the woman officer).

Another 14 are seriously injured. The report will later blame the bloodbath on a ‘software glitch’.

It sounds like a blood-spattered scene from the new blockbuster Terminator Salvation, in which a military computer takes over the world using an army of robot soldiers.

But this bloodbath actually happened. And concern is mounting that it may happen again and again, as a growing number of military robots flood the battlefield.

And this one talks about the various sizes and potential issues that could happen.

‘Just look at the numbers,’ he says. ‘We went into Iraq in 2003 with zero robots. Now we have 12,000 on the ground. They come in all shapes and sizes, from tiny machines to robots bigger than an 18-wheeler truck.

There are ones that fit on my little finger and ones with the wingspan of a football field.’

The U.S. military is the biggest investor in robot soldiers. Its robot programme, dubbed Future Combat Systems, is budgeted to spend $240 billion over the next 20 years.

But Singer is worried that in the rush to bring out ever more advanced systems, many lethal robots will be rolled out before they are ready.

It is a terrifying prospect. ‘Imagine a laptop armed with an M16 machine-gun,’ one expert said.

According to Noel Sharkey, a professor of robotics and artificial intelligence at Sheffield University, one of the biggest concerns is that this growing army of robots could stray out of communication range.

‘Just imagine a rogue robot roaming off the battlefield and into a nearby village,’ he says. ‘Without experts to shut it down, the results could be catastrophic.’

There are robots that can move through sand and water. There are robots that can hover, robots that can fly, humanoid robots. There are robots that can utilize a machine gun with the accuracy of a sniper shooting an apple from hundreds of meters. These robots can be armed with grenade launchers, machine guns, and rocket launchers. They’re not so smart, but they are good at what they are told to do.

Of course, as with any weapons technology, there is fear it could fall into the wrong hands (assuming it started in the right hands to begin with). There is also fear of the robots making mistakes. For example, one might misidentify something as a threat. Can robots be made to understand the rules of engagement? These are questions that have to be dealt with and their consequences understood. But make no mistake, we (the world) have proceeded down this path. It is happening, and hopefully we can keep it under control or at least stay ahead of the curve.

I will leave you with this final thought from the article.

‘Body bags containing real soldiers coming home affect the government electorally,’ says Sharkey. ‘Once you start using robots, you remove this problem.’

But do we really want going to war to be as easy, and impersonal, as playing a computer game?

Jan 24

Are you living in a computer simulation?

This is a portion of a much longer document, written not by me but by NICK BOSTROM. Please visit his site by using the links he provided in the original work.


ARE YOU LIVING IN A COMPUTER SIMULATION?

BY NICK BOSTROM

Department of Philosophy, Oxford University

Homepage: http://www.nickbostrom.com

[First version: May 2001; final version: July 2002]

Published in Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255.

[This document is located at http://www.simulation-argument.com] [pdf-version] [mirrored by PoolOfThought pdf-version]

ABSTRACT

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

I. INTRODUCTION

Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.

Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.
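As a teaser for that core section, the “simple probability theory” boils down to one fraction. What follows is my own simplified rendering of it (with my variable names), not a quote from the paper:

```python
# Rough version of the paper's key fraction: the share of human-type minds
# that are simulated rather than biological.

def fraction_simulated(f_p, n_avg):
    """f_p: fraction of civilizations that reach a posthuman stage and run
    ancestor-simulations; n_avg: average number of population-sized
    simulated histories each such civilization runs."""
    return (f_p * n_avg) / (f_p * n_avg + 1)

print(fraction_simulated(0.01, 1000))  # ~0.91: modest assumptions dominate
```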