Oct 02

iPhone and iPad naming conventions, icon descriptions, and icon sizes

The table below describes iPhone and iPad icons: what their sizes should be and what their standardized names are. This information is scattered all over, but a thorough yet concise version (and what I based the table on) can be found here.

Name Size (pixels) Platform
Icon.png 57 x 57 Universal application icon
Icon-settings.png 29 x 29 Universal application icon for settings area. Alternative name: Icon-Small.png
Icon~ipad.png 72 x 72 iPad application icon. Alternative name: Icon-72.png. Add some smaller custom icons (iPad docs: 64×64; other optional sizes: 32×32, 24×24, 16×16) to your project. See comments.
Icon-spot~ipad.png 50 x 50 iPad icon for Spotlight search. Alternative name: Icon-Small-50.png. iPhone OS trims 1 pixel from each side and adds a drop shadow; the actual size is 48×48 pixels.
iTunesArtwork.png 512 x 512 Universal application icon for the iTunes App Store. Uploaded separately to iTunes. It’s included in the app bundle too, under the file name iTunesArtwork. In an iPad application, iPhone OS uses this image to generate the large (320×320) document icon if it is not supplied otherwise.
Default.png 320 (w) x 480 (h) iPhone/iPod 2, 3 portrait launch image
Default@2x.png 640 (w) x 960 (h) iPhone 4 hi-res portrait launch image
Default~ipad.png 768 (w) x 1004 (h) iPad. Specifies the default portrait launch image. This image is used if a more specific image is not available. Use the full-size template (768×1024) to design this launch image. The 20-pixel-high status bar is on by default and occupies the top of the screen, hence 1004 rows instead of 1024.
Optional icons and images:
Icon@2x.png 114 x 114 iPhone 4 hi-res application icon
Icon-settings@2x.png 58 x 58 iPhone 4 hi-res application icon for settings/search area
Icon-doc.png 22 (w) x 29 (h) Universal document icon
Icon-doc@2x.png 44 (w) x 58 (h) iPhone 4 hi-res document icon
Icon-doc~ipad.png 64 x 64 iPad document icon (small)
Icon-doc320~ipad.png 320 x 320 iPad document icon (large)
Background-xxx.png 320 (w) x 480 (h) iPhone/iPod Touch 2, 3 background image
Background-xxx.png 640 (w) x 960 (h) iPhone 4 background image, full size
Background-xxx.png 768 (w) x 1024 (h) iPad background image, full size. For most projects the status bar is hidden, so use full screen size by default.
Default-PortraitUpsideDown~ipad.png 768 (w) x 1004 (h) iPad. Specifies an upside-down portrait version of the launch image. The height of this image should be 1004 pixels and the width should be 768. This file takes precedence over the Default-Portrait.png image file for this specific orientation.
Default-LandscapeLeft~ipad.png 1024 (w) x 748 (h) iPad. Specifies a left-oriented landscape version of the launch image. The height of this image should be 748 pixels and the width should be 1024. This file takes precedence over the Default-Landscape.png image file for this specific orientation.
Default-LandscapeRight~ipad.png 1024 (w) x 748 (h) iPad. Specifies a right-oriented landscape version of the launch image. The height of this image should be 748 pixels and the width should be 1024. This file takes precedence over the Default-Landscape.png image file for this specific orientation.
Default-Portrait~ipad.png 768 (w) x 1004 (h) iPad. Specifies the generic portrait version of the launch image. The height of this image should be 1004 pixels and the width should be 768. This image is used for right side-up portrait orientations and takes precedence over the Default~ipad.png image file. If a Default-PortraitUpsideDown.png image file is not specified, this file is also used for upside-down portrait orientations.
Default-Landscape~ipad.png 1024 (w) x 748 (h) iPad. Specifies the generic landscape version of the launch image. The height of this image should be 748 pixels and the width should be 1024. If a Default-LandscapeLeft.png or Default-LandscapeRight.png image file is not specified, this image is used instead. This image takes precedence over the Default.png image file.
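
For reference, here is a sketch of how the icon files above might be declared in an application’s Info.plist using the CFBundleIconFiles array (available since iPhone OS 3.2). With the standardized names above this is often unnecessary, since the system finds them by convention, but listing them explicitly makes the bundle self-documenting:

```xml
<!-- Info.plist fragment (sketch): declare every icon variant the bundle
     ships; iPhone OS picks the best size for each context (home screen,
     Spotlight/Settings, documents). -->
<key>CFBundleIconFiles</key>
<array>
    <string>Icon.png</string>
    <string>Icon@2x.png</string>
    <string>Icon-Small.png</string>
    <string>Icon~ipad.png</string>
    <string>Icon-Small-50.png</string>
</array>
```
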
Aug 28

Some thoughts on Software Patents

I have read a lot of what I consider to be fairly convincing arguments about software patents and whether the courts should “allow them” or “ban them”. Before I had the insight I’m going to share below, I was definitely a fence hopper, but I think I have finally satisfied myself with an answer. It takes a wee bit of imagination and a willingness to be somewhat philosophical, but I think the thought process will get you there.


If you’ve seen “The Matrix” you know that almost the entire movie involves real people living and acting in a virtual world. If you’ve seen “The Matrix Reloaded” you’ll remember a scene where a ship is returning to the real-world city of Zion and they are getting ready to enter the gates to the city. The city is very mechanical and computers are utilized to control everything, but do the people of Zion who operate the computers sit behind a keyboard? No, they don’t. Instead they “plug in” in the real world, transferring their minds into the virtual world in which someone (presumably other humans) has programmed all of these controls. The cool thing about these controls is that not only can they be laid out like a keyboard, but they can also be like a lever. It’s their own virtual world so they can build it however they want. They just need to be presented with an environment that they can manipulate to get “the job” done.


Now imagine current earth humans being able to allow their mind to live or work inside of a virtual world. Everything in that virtual world is actually software! Nothing is physical, though it could be designed to look and feel like it is. It may be designed such that your physical actions (i.e. grabbing a lever and pulling / pushing it) cause the software to behave in different ways, but it’s still not actually physical. How you interact with that software simply causes some state change in the outside world, but what you are interacting with IS software. Yes, the changes you introduce by manipulating the controls made available to you will cause some other software to cause changes in the outside world, which will in turn cause a change to the view of the world presented to those in the virtual world (and in the physical world). But it’s still software making it all work. The tools are software. The connections are software. The actor could even be software.


Once these types of systems are possible, and especially once they are commonplace, there could be a rush of what, in the physical world of today, we would call innovative people coming up with new widgets that can be used inside of this virtual world. These innovations will almost surely come with a price that would be paid by the programmer. In that case there would be a need for protection under some type of law in order to encourage people to create, test, and perfect them. Do we have a system that provides this sort of protection today? Yes, we do, and it is the patent system. It would also be applicable to this sort of situation, considering the new types of “tools” that people would “physically” interact with inside the virtual world. All manner of things are possible in the real world today that we once knew weren’t possible (until someone innovated a way to do it), and the same will be true in the virtual world. Ways of doing things never even thought of will be, given the right motivation, not only thought of but implemented and improved upon. Different ways of looking at problems will cause unique solutions to become apparent. The solutions would be “obvious” once pointed out, but would be nonobvious prior. Why would someone dedicate their time to looking for alternate solutions if the answer will net them no reward? History shows us that they won’t… not to the same degree anyway.


Q: What about a hammer vs. a “virtual hammer”? Would you really allow a patent on a virtual hammer that does the same thing in the software world that it does in the real world? It seems like everything would get re-patented, with the only difference being that it is “in software”.


A: This question stems from one of the common errors untrained people make when judging patent validity: you can’t just look at the title, or the summary. Think about it. A software hammer wouldn’t be the same thing as a hardware hammer, would it? Software doesn’t have physical nails to drive. But maybe a software hammer can be made such that it easily automates the binding of two or more components using a single connective module. Something that used to take 10 virtual actions can be rolled up into the single action of hitting the objects with a hammer. The hammer basically just does all of those steps that “physically” had to be done before and eliminates them through some ingenious piece of code. Testing this piece of code and finding just the right tweaks for it came at a cost of thousands of lost operations (CPU cycles), mangled data, and even memory leaks that had to be dealt with before it became stable enough to be used in the virtual world. Why would someone give up these precious resources if it would not gain them some advantage? Now that it is done, it is an easily copyable solution, so what’s to stop another from copying it and using it without having put their own butt on the line? Copyright doesn’t do the trick, since code can be rewritten (hell, translate it to another language and you’ll have to modify it to do so). “You’re still using the same algorithm, but it’s obviously not the same code.” Yes it is, and you shouldn’t be allowed to steal the code, change the language, and call it new.


It is my belief that as things become more virtualized, and as virtual reality becomes both more real and more immersive, we will see more need for patents on things in the virtual world. These things are no doubt software. But they are also no doubt in need of protection.


To be continued… or is this one step too far?


And if we know that software should be patentable in the case of said eventual world, then software should be patentable now due to the simple fact that the simulation argument leads there.


Jan 14

Website color charts

Anyone creating websites has come across a need for color charts. Whether picking a background, a font color, a border color, or any other colored control / feature, it is crucial to have some sort of reference to work from. I’m not that artsy, so I don’t have the color codes memorized through heavy use, but I also hate going and looking for decent ones.

As I find decent charts, I’ll add them here.

The main reason I like the first one listed is that it has not only the codes, but also a picture of each of the CSS standard colors. I am often coding away, trying to pick a color, but the IntelliSense only shows the color NAME… not a sample next to it. Now I can whip out my handy dandy color chart via the link below, find the color I want visually, and then use the name specified next to it. Life is good.

Color Chart: http://www.neopets.com/~triflot

Dec 26

Link checker – Bad neighborhood

I often get requests for me to add links to my sites. Usually it is just someone looking for something simple that will deliver them some relevant traffic.
What I have found, though, is that one should ALWAYS verify that the link destination is okay. It should not be in a bad neighborhood. In addition, it should not link out to bad neighborhoods. These bad neighborhoods will get sites that link to them penalized in the search engines. That’s right – the sites that you link to can get your site penalized. Not only that, but the sites THEY link to might get your site penalized.
The link below has a bad-neighborhood checker. It will scan a URL and determine if there are questionable links. Then it will scan the linked-to pages to see if any of their links are questionable in nature. It’s a great little tool and I highly recommend using it.

Dec 26

Don’t try to beat the search engine

I just read the following article while trying to determine if statically named pages are better for SEO than those with parameters in the URL. I’m always impressed with the things that are returned when I google something. It is often not entirely relevant to what I was looking for, but can be very interesting anyway.

If you get a chance and you are interested in SEO at all, you might give the following a read:


From the article:


So what’s the bottom line? There are really two major things you need to do:

* Learn how to communicate to the search engine what your site is about. Many of the problems listed above relate to common practices that make the search engine’s job harder, or even impossible. Learning how to build your site so that the search engine can easily determine the unique value of your site is an outstanding idea.

* Don’t spend your time figuring out how to beat the search engine. It’s just not a good place to be. You may even succeed in the short term. But if you do succeed in tricking them in the short term, the day will come when you wake up in the morning and a significant piece of your business has disappeared overnight. Not a good feeling at all.

Take the same energy you would have invested in the tricks and invest it in great content for your site, and in the type of marketing programs you would have implemented if the search engines did not exist.

This is how you can grow your business for the long term.

Jun 12

Upgrading MySQL

Apparently the last time I set up MySQL I did it using some cheap (read: crappy) installer. It didn’t include any of the extension DLLs. I’ve now decided it is much less of a headache to simply download the zip file and do the configuration myself. Below are the instructions I followed; they worked great when I did.

Installing PHP.

The path used for PHP is just an example, you can choose another if you want.

  • Extract the archive in C:\PHP (Rename it if necessary)
  • Rename C:\PHP\php.ini-recommended to C:\PHP\PHP.INI

Configure the Session directory

  • Open C:\PHP\PHP.INI

Make sure you remove the initial semicolons!

  • Find
;session.save_path = "/tmp"

replace it with

session.save_path = "C:\WINDOWS\TEMP"
  • Find
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; cgi.force_redirect = 1

replace it with

; **You CAN safely turn this off for IIS, in fact, you MUST.**
cgi.force_redirect = 0

Configure PHP extensions

  • Find
; Directory in which the loadable extensions (modules) reside.
extension_dir = "./"

replace it with

; Directory in which the loadable extensions (modules) reside.
extension_dir = "C:\PHP\EXT"

If you can’t find “extension_dir” in your C:\PHP\PHP.INI file, add it to the bottom of the file.

MySQL extension

As Gallery 2 uses a database to store its metadata, you need to enable database support in PHP. This guide uses MySQL, but the procedure would be similar for Postgres or Oracle.

  • Find
;extension=php_mysql.dll

replace it with

extension=php_mysql.dll

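With the MySQL extension line uncommented, a quick sanity check is a short script run from the command line (a sketch; the filename check_mysql.php is arbitrary):

```php
<?php
// check_mysql.php (hypothetical filename) -- run with:
//   C:\PHP\php.exe check_mysql.php
// Reports whether the MySQL extension was actually picked up from extension_dir.
if (extension_loaded('mysql')) {
    echo "MySQL extension loaded\n";
} else {
    echo "MySQL extension NOT loaded -- re-check extension_dir and the dll name\n";
}
```

If it reports not loaded, double-check that extension_dir points at C:\PHP\EXT and that the dll file actually exists there.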
Gettext extension

  • In order to make the localization of G2 (multi-language) work, you need the gettext extension of PHP. This can be enabled in php.ini; G2 will hint you about this. However, gettext is a slightly strange extension.
  • Find
;extension=php_gettext.dll

replace it with

extension=php_gettext.dll

But now comes the crux: php_gettext.dll depends on \php-install-dir\dlls\iconv.dll. All other extensions worked flawlessly for me, but gettext required me to put iconv.dll into a directory that is included in the search path, e.g. \Windows\System32. I then overreacted and copied all DLLs to that System32 dir. The PHP 4 manual tells you to copy the DLLs to the PHP install dir, but that only works if you manually add the PHP dir to the Windows PATH statement.

GD2 extension

Find the extension in your php.ini and remove the semicolon in front of the line ;extension=php_gd2.dll

  • Find
;extension=php_gd2.dll

replace it with

extension=php_gd2.dll


Make PHP available to IIS

Set the system path to include C:\PHP

  • Click on My Computer -> Properties -> Advanced -> Environment Variables
  • Scroll down the System Variables (bottom window) and double-click on the PATH variable
  • Add the following to the end (make sure you include the initial semicolon): ;C:\PHP
  • Click OK

Make PHP.INI available to PHP

  • While you still have the Environment variables window open click new
  • In the Variable Name field enter PHPRC
  • Set the Variable Value to C:\PHP

This will make PHP.INI available to PHP (We will verify this later)
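
One way to verify it (a sketch; the file name info.php and the web root path are just examples) is to drop a one-line script into your web root and load it in a browser once IIS is configured below:

```php
<?php
// info.php (example name) -- place in your IIS web root, e.g.
// C:\Inetpub\wwwroot, then browse to http://localhost/info.php.
// Check the "Configuration File (php.ini) Path" row of the output to
// confirm PHP is reading the php.ini you configured in C:\PHP.
phpinfo();
```
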

Configuring IIS

You have a choice of whether to set up PHP to use the ISAPI extension, the CGI executable, or FastCGI. The ISAPI extension is not fully stable, and the CGI executable’s performance is very poor because the php-cgi.exe executable is unloaded after every request. If php-cgi.exe were always loaded in memory, performance would be greatly increased. There are two ways of doing this.
1. Spend $500 for Zend’s own WinEnabler [2]
2. Setup the free FastCGI program that does the same thing as WinEnabler

The recommended way of running PHP on IIS is using FastCGI. Below you will find instructions on how to setup PHP using ISAPI but if your site is going to serve lots of pages, you will probably want to go with FastCGI.

Add the PHP ISAPI extension to IIS Web Service Extensions

  • Click on Start -> Administrative Tools -> Internet Information Services (IIS) Manager
  • Expand the local computer in the left pane
  • Click on “Web Service Extensions” in the left pane
  • In the right pane, click the blue underlined text, “Add a new Web service extension…”
  • Enter “PHP5 ISAPI” as the “Extension name”
  • Click the “Add…” button and browse to the php5isapi.dll file in your C:\PHP install directory
  • Click Open -> OK
  • Check the “Set extension status to Allowed” checkbox and click “OK”

Adding the PHP parsing to your IIS website

Note: You can add this either on the top-level Web Sites or to individual web sites beneath it. If you add it to the top-level web sites node in the left pane, it applies to all websites on the IIS instance. You can also choose to only install it on specific websites beneath the top-level node, in that case it will only apply to that site. The procedure for adding is the same for both scopes.

Be careful when applying this to the top-level node, as it will override settings defined in the individual websites beneath it.

  • In the left pane, expand Web Sites
  • Right Click the website you want to configure, and select properties
  • Open the Home Directory tab
  • Click Configuration
  • Then go to the Mappings tab
  • Click Add…
  • Enter the full path to php5isapi.dll in the “Executable” textbox or click the Browse button to browse your way to it. If you have followed the path recommendations in this guide, the full path should be C:\PHP\php5isapi.dll
  • Enter .php in the Extension textbox
  • Select Limit to, enter GET,POST,HEAD
  • Click OK and verify that .PHP is now included in the Application extensions listbox
  • Click OK

This configures IIS to understand what to do with files ending with .php

Adding scripting permissions

  • While still having the Web Site Properties dialog box open, click Home Directory
  • Make sure that “Execute permissions” dropdown is set to “Scripts only”.
  • Click OK
Jun 12

Eventum software

I recently set up an issue-tracking system called Eventum. Boy, was it FUN! The 3-minute install took me about 4 1/2 hours. I had to upgrade MySQL (actually, I chose to when I started to get an unexpected error during the install – it turns out my install was missing some DLLs). That upgrade was a real pain in the butt… until I got it right. It only took about 10 minutes once I figured out exactly what to do. Then Eventum only took about 10 minutes from that point. There’s a gotcha in Eventum, though. The source of this information is the message below. I just removed that setting and POOF… everything worked.

From: Date: April 26 2007 11:41am
Subject: SUCCESS!! Eventum 2.0.1 Installation on WinXP/SP2+Apache-2.2.4+PHP-5.2.1+MySQL-5.0.37-nt community edition
List-Archive: http://lists2.mysql.com/eventum-users/3965

Hi all,

A Million Thanks to Gaetano Giunta for providing the "breakthrough"!

I feel the Eventum Wiki needs to be updated with the following info:

1. The SQL Mode should NOT include "STRICT_TRANS_TABLES". I changed it through
   the MySQL Administrator GUI and restarted the Service. The "SQL Mode"
   setting is under the "Advanced" tab for "Startup Variables", in the MySQL
   Administrator GUI.

2. PHP should have the "Gettext" extension enabled in addition to "MySQL",
   without which, although the Setup goes through fine, we would still get a
   blank page while trying to do the first login.

Now, I am able to run both 1.7.1 and 2.0.1 on my laptop!!

Thanks and rgds,
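
The same fix can also be made directly in the server’s option file instead of through the MySQL Administrator GUI (a sketch; assumes MySQL 5.x on Windows — the location of my.ini varies by install):

```ini
; my.ini fragment (sketch): the goal is an sql-mode that does not include
; STRICT_TRANS_TABLES before running the Eventum installer; an empty mode
; is the bluntest way to get there. Restart the MySQL service afterwards.
sql-mode=""
```
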
Apr 28

Manna – A story of Automated management by computers

A couple of days ago I read a story by Marshall Brain. I think Slashdot referred me to it (the first sign it was probably going to be a quality read), and after the first section I was hooked. It is either seven or eight chapters long and I read them all in one sitting. The story is about some software called “Manna” that was created to help manage employees who did easily micromanaged tasks. The software was deployed in a fast-food restaurant with much success. Customers were happy, workers were happy, things were efficient. The rest of the story tells what happens when Manna’s usage is applied to additional domains as its capabilities are enhanced. I will include chapter one below in hopes of getting you hooked as well. Go check out the original source and all the additional chapters if you like it.

Here’s the first part:

Depending on how you want to think about it, it was funny or inevitable or symbolic that the robotic takeover did not start at MIT, NASA, Microsoft or Ford. It started at a Burger-G restaurant in Cary, NC on May 17, 2010. It seemed like such a simple thing at the time, but May 17 marked a pivotal moment in human history.

Burger-G was a fast food chain that had come out of nowhere starting with its first restaurant in 2001. The Burger-G chain had an attitude and a style that said “hip” and “fun” to a wide swath of the American middle class. The chain was able to grow with surprising speed based on its popularity and the public persona of the young founder, Joe Garcia. By 2010 the chain had 1,000 outlets in the U.S. and showed no signs of slowing down. If the trend continued, Burger-G would soon be one of the “Top 5” fast food restaurants in the U.S.

The “robot” installed at this first Burger-G restaurant looked nothing like the robots of popular culture. It was not hominid like C-3PO or futuristic like R2-D2 or industrial like an assembly line robot. Instead it was simply a PC sitting in the back corner of the restaurant running a piece of software. The software was called “Manna”, version 1.0*.

Manna’s job was to manage the store, and it did this in a most interesting way. Think about a normal fast food restaurant circa 2000. There was a group of employees who worked at the store, typically 50 people in a normal restaurant who rotated in and out on a weekly schedule. The people did everything from making the burgers to taking the orders to cleaning the tables and taking out the trash. All of these employees reported to the store manager and a couple of assistant managers. The managers hired the employees, scheduled them and told them what to do each day. This was a completely normal arrangement. In 2000, there were millions of businesses that operated in this way.

Circa 2000, the fast food industry had a problem, and Burger-G was no different. The problem was the quality of the fast food experience. Some restaurants were run perfectly. They had courteous and thoughtful crew members, clean restrooms, great customer service and high accuracy on the orders. Other restaurants were chaotic and uncomfortable to customers. Since one bad experience could turn a customer off to an entire chain of restaurants, these poorly-managed stores were the Achilles heel of any chain.

To solve the problem, Burger-G contracted with a software consultant and commissioned a piece of software. The goal of the software was to replace the managers and tell the employees what to do in a more controllable way. Manna version 1.0 was born.

Manna was connected to the cash registers, so it knew how many people were flowing through the restaurant. The software could therefore predict with uncanny accuracy when the trash cans would fill up, the toilets would get dirty and the tables needed wiping down. The software was also attached to the time clock, so it knew who was working in the restaurant. Manna also had “help buttons” throughout the restaurant. Small signs on the buttons told customers to push them if they needed help or saw a problem. There was a button in the restroom that a customer could press if the restroom had a problem. There was a button on each trashcan. There was a button near each cash register, one in the kiddie area and so on. These buttons let customers give Manna a heads up when something went wrong.

At any given moment Manna had a list of things that it needed to do. There were orders coming in from the cash registers, so Manna directed employees to prepare those meals. There were also toilets to be scrubbed on a regular basis, floors to mop, tables to wipe, sidewalks to sweep, buns to defrost, inventory to rotate, windows to wash and so on. Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.

Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

The software would speak to the employees individually and tell each one exactly what to do. For example, “Bob, we need to load more patties. Please walk toward the freezer.”

Or, “Jane, when you are through with this customer, please close your register. Then we will clean the women’s restroom.”

And so on. The employees were told exactly what to do, and they did it quite happily. It was a major relief actually, because the software told them precisely what to do step by step.

For example, when Jane entered the restroom, Manna used a simple position tracking system built into her headset to know that she had arrived. Manna then told her the first step.

Manna: “Place the ‘wet floor’ warning cone outside the door please.”

When Jane completed the task, she would speak the word “OK” into her headset and Manna moved to the next step in the restroom cleaning procedure.

Manna: “Please block the door open with the door stop.”

Jane: “OK.”

Manna: “Please retrieve the bucket and mop from the supply closet.”

Jane: “OK.”

And so on.

Once the restroom was clean, Manna would direct Jane to put everything away. Manna would make sure that she carefully washed her hands. Then Manna would immediately start Jane working on a new task. Meanwhile, Manna might send Lisa to the restroom to inspect it and make sure that Jane had done a thorough job. Manna would ask Lisa to check the toilets, the floor, the sink and the mirrors. If Jane missed anything, Lisa would report it.

I grew up in Cary, NC. That was a long time ago, but when I was a kid I lived right in the middle of Cary with my parents. My father was a pilot for a big airline. My mother was a stay-at-home mom and I had a younger sister. We lived in a typical four bedroom suburban home in a nice neighborhood with a swimming pool in the backyard. I was a 15 year-old teenager working at the Burger-G on May 17 when the first Manna system came online.

I can remember putting on the headset for the first time and the computer talking to me and telling me what to do. It was creepy at first, but that feeling really only lasted a day or so. Then you were used to it, and the job really did get easier. Manna never pushed you around, never yelled at you. The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step. Each step was easy. You could go through the whole day on autopilot, and Manna made sure that you were constantly doing something. At the end of the shift Manna always said the same thing. “You are done for today. Thank you for your help.” Then you took off your headset and put it back on the rack to recharge. The first few minutes off the headset were always disorienting — there had been this voice in your head telling you exactly what to do in minute detail for six or eight hours. You had to turn your brain back on to get out of the restaurant.

To me, Manna was OK. The job at Burger-G was mindless, and Manna made it easy by telling you exactly what to do. You could even get Manna to play music through your headphones, in the background. Manna had a set of “stations” that you could choose from. That was a bonus. And Manna kept you busy the entire day. Every single minute, you had something that Manna was telling you to do. If you simply turned off your brain and went with the flow of Manna, the day went by very fast.

My father, on the other hand, did not like Manna at all from the very first day he saw me wearing the headset in the restaurant. He and Mom had come in for lunch and to say hi. I knew they were coming, so I had timed my break so I could sit down with them for a few minutes. When I sat down, my father noticed the headset.

“So”, he said, “they have you working the drive-thru I see. Is that a step up or a step down?”

“It’s not the drive-thru,” I replied, “it’s a new system they’ve installed called Manna. It manages the store.”

“How so?”

“It tells me what to do through the headset.”

“Who, the manager?”

“No, it’s a computer.”

He looked at me for a long time, “A computer is telling you what to do on the job? What does the manager do?”

“The computer is the manager. Manna, manager, get it?”

“You mean that a computer is telling you what to do all day?”, he asked.


“Like what?”

I gave him an example, “Before you got here, I was taking out the trash. Manna told me how to do it.”

“What did it say?”

“It tells you exactly what to do. Like, It told me to get four new bags from the rack. When I did that it told me to go to trash can #1. Once I got there it told me to open the cabinet and pull out the trash can. Once I did that it told me to check the floor for any debris. Then it told me to tie up the bag and put it to the side, on the left. Then it told me to put a new bag in the can. Then it told me to attach the bag to the rim. Then it told me to put the can back in and close the cabinet. Then it told me to wipe down the cabinet and make sure it’s spotless. Then it told me to push the help button on the can to make sure it is working. Then it told me to move to trash can #2. Like that.”

He looked at me for a long time again before he said, “Good Lord, you are nothing but a piece of a robot. What is it saying to you now?”

“It just told me I have three minutes left on my break. And it told me to smile and say hello to the guests. How’s this? Hi!” And I gave him a big toothy grin.

“Yesterday the people controlled the computers. Now the computers control the people. You are the eyes and hands for this robot. And all so that Joe Garcia can make $20 million per year. Do you know what will happen if this spreads?”

“No, I don’t. And I think Mr. G makes more than $20 million a year. But right now I’ve got two minutes left, and Manna is telling me that I need to move back to station 3 to get ready for the next run. See ya.” I waved at Mom. Dad just stared at me.

The tests in our Burger-G store were surprisingly successful. There were Burger-G corporate guys in the restaurant watching us, fixing bugs in the software, making sure Manna was covering all the bases, and they were pleased. It took about 3 months to work all the kinks out, and as they did the Manna software totally changed the restaurant. Worker performance nearly doubled. So did customer satisfaction. So did the consistency of the customer’s experience. Trash cans never overfilled. Bathrooms were remarkably clean. Employees always washed their hands when they needed to. Food was ready faster. The meals we handed out were nearly 100 percent accurate because Manna made us check to make sure every item in the bag was exactly what the customer ordered. The store never ran out of supplies — there were always plenty of napkins in the dispenser and the ketchup container was always full. There were enough employees in the store for the busy times, because Manna could accurately track trends and staff appropriately.

In addition, Burger-G saved a ton of money. In 2010, Burger-G had just over 1,000 stores in the United States. Manna worked so well that Burger-G deployed it nationwide in 2011. By 2012 Burger-G had cut more than 3,000 of its higher-paid store employees — mostly assistant managers and managers. That one change saved the company nearly $100 million per year, and all that money came straight to the bottom line for the restaurant chain. Shareholders were ecstatic. Mr. G gave himself another big raise to celebrate. In addition, Manna had optimized store staffing and had gotten a significant productivity boost out of the employees in the store. That saved another $150 million. $250 million made a huge difference in the fast food industry.

So, the first real wave of robots did not replace all the factory workers as everyone imagined. The robots replaced middle management and significantly improved the performance of minimum wage employees. All of the fast food chains watched the Burger-G experiment with Manna closely, and by 2012 they started installing Manna systems as well. By 2014 or so, nearly every business in America that had a significant pool of minimum-wage employees was installing Manna software or something similar. They had to do it in order to compete.

In other words, Manna spread through the American corporate landscape like wildfire. And my dad was right. It was when all of these new Manna systems began talking to each other that things started to get uncomfortable.

Next Chapter (at MarshallBrain.com)

Feb 22

Give Them what they want; not what they ask for

I was reading this article on Slashdot the other day and it occurred to me how often I made a particular mistake when I first started programming. I would create a perfectly sound little app for someone, and then when they complained I would modify it to match exactly what they asked for. But that is often the wrong reaction. What we should really do is determine what the real problem is and how to address it. Sometimes the complaining user has enough power to force the change on you against your will, even when you know the “fix” they want is a bad idea, but often we have enough autonomy to come up with a happy medium.

The example from the article was in a game environment. Some users would get stuck and wanted a way to get a hint so they could move on to the next level. The game writer’s fear was that if he offered hints, users would “give up” too quickly and just take hint after hint. The users asked for hints, but he thought unlimited hints were a bad idea. The solution? Limit the number of hints, or make hints cost points from your score, or only offer a hint after a certain amount of time has elapsed. In any case it was possible to give the users what they wanted without literally giving them what they asked for.
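Those three mitigations compose nicely into one gatekeeper. Here is a minimal sketch in Python; the class name and all thresholds (three hints, 50 points, two minutes) are my own illustrative choices, not from the article:

```python
import time

class HintPolicy:
    """Gate hints so players can get unstuck without trivializing the game.

    Combines the three mitigations from the post: a per-level hint budget,
    a score penalty per hint, and a minimum time spent stuck before the
    first hint is offered.
    """

    def __init__(self, max_hints=3, penalty=50, min_stuck_seconds=120):
        self.max_hints = max_hints
        self.penalty = penalty
        self.min_stuck_seconds = min_stuck_seconds
        self.hints_used = 0
        self.level_started_at = time.monotonic()

    def start_level(self):
        # Reset the budget and the clock whenever the player begins a level.
        self.hints_used = 0
        self.level_started_at = time.monotonic()

    def request_hint(self, score):
        """Return (hint_granted, new_score, reason)."""
        elapsed = time.monotonic() - self.level_started_at
        if elapsed < self.min_stuck_seconds:
            return False, score, "keep trying a little longer"
        if self.hints_used >= self.max_hints:
            return False, score, "no hints left for this level"
        self.hints_used += 1
        return True, score - self.penalty, "hint granted"
```

The point of centralizing the decision in one `request_hint` call is that the designer can later tune (or remove) any of the three knobs without touching the rest of the game loop.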

Jan 24

Are you living in a computer simulation?

This is a portion of a much longer document not written by me, but by NICK BOSTROM. Please visit his site by using the links he provided in the original work.




Nick Bostrom
Department of Philosophy, Oxford University

[First version: May 2001; Final version: July 2002]

Published in Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255.

[This document is located at http://www.simulation-argument.com] [pdf-version] [mirrored by PoolOfThought pdf-version]



ABSTRACT

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.



Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.
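The probability bookkeeping behind this idea can be sketched as follows (this is my summary of the calculation developed in the core of the paper, with notation chosen to match it). Write $f_P$ for the fraction of human-level civilizations that survive to a posthuman stage, $\bar{N}$ for the average number of ancestor-simulations run by such a civilization, and $\bar{H}$ for the average number of individuals who lived in a civilization before it became posthuman. The fraction of all observers with human-type experiences that live in simulations is then

```latex
f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} \;+\; \bar{H}}
\;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

If the product $f_P \bar{N}$ is large, $f_{\mathrm{sim}}$ is close to 1, which is how the trilemma arises: either $f_P$ is tiny (proposition 1), or $\bar{N}$ is tiny (proposition 2), or almost all minds like ours are simulated (proposition 3).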

Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.