Nov 18

Roger Goodell Does The Impossible – Proves that Unions Are Necessary

I’m an NFL football fan. But I don’t have to be, and at this moment I’m not sure I want to be, due to Roger Goodell’s handling of Adrian Peterson. If you read my post on the recent announcement of Apple’s CEO, then you’ll know I don’t like to support companies whose leaders do pretty major things that make me sad. Whether they make me sad by being unfair to those under their power or by being blatantly dishonest about important matters – either way they’ve broken a trust. It’s my money and my time, so why use it to support something that makes me sad?

Roger Goodell has managed to do something nobody else has been able to do. Something I never thought was even possible. Something that the party that leans more to the left hasn’t been able to do in my 35+ years. Something my college professors couldn’t do. Something my wife’s dad, who was a vice president in the firefighters’ union, couldn’t do. Goodell’s actions have convinced me of something probably every conservative would agree is a terrible idea, which is funny because I’m pretty sure that Goodell walks on the conservative side of the political line. In any case, his decisions and actions have managed to convince me of something conservatives have fought hard to discredit. He’s convinced me that unions are still important and necessary.

Now, to be honest, all this has done is upgrade unions in my mind from “completely unnecessary and counterproductive” to the slightly more elevated state of “needed, but a necessary evil”, but I honestly never thought I’d see the day that such a thought would even enter my mind. What Goodell has done to Adrian Peterson is reprehensible and inexcusable. The arbitrary punishments doled out in the face of pressure from some fans and some advertisers are understandable, but they still show signs of weakness on the commissioner’s part. I’ve yet to see him make a proactive move in any of his dealings, and his reactive ones have only come after he’s been forced into a corner. Now he’s managed to back someone else into a corner, and even though I didn’t think I’d ever say it, I’m glad there’s a union there to back up his victim. As USA Today’s Nancy Armour put it, Goodell hasn’t done anything here other than try to protect what he considers to be his turf, and he ended up just picking a fight with the union.

If the union really wanted to impress they’d convince players to hold out on the Thanksgiving Day games coming up next week. Hell, I bet the union has enough in the coffers to write a game check to every player for that missed week. That would be glorious! I could live with it, and it would probably further strengthen my newfound slightly positive opinion toward unions. Of course, they might not have to. The union might win its grievance hearing and Peterson will be allowed to play sooner than next year. Maybe that’s Goodell’s plan anyway. This way he can act like he tried to get rid of Peterson but the arbitrator was convinced by the big, mean union and forced his hand to let Peterson play again this year. Maybe that’s the plan. But it shouldn’t have come to that, and it frustrates me that, if that is his plan, he simply proved that unions are necessary in order to protect individuals from weak leaders.

Roger Goodell has made me sad today, but maybe I should thank him (and the other owners for continuing to support him) for opening my eyes to the truth. As long as there are leaders like Goodell, there will be a need for organized protection of workers’ rights. I’d still rather just get rid of the bad leaders, but even if that were the plan there would need to be some mechanism for raising awareness about bad leaders when they slip through the cracks, and I’m not sure I can think of another more functional mechanism for that very task than a union…

I guess unions are probably a little bit like firearms. They can be used wrong and can cause all kinds of damage if whoever is controlling them doesn’t make good decisions and doesn’t pay close attention to where they’re pointing their power. It’s best to not wield the power at all if possible. But in the end, it’s probably better to have one and not need it than it is to need one and not have one.

Jun 09

Tennessee Conceal and Carry in Restaurants

Late last week the Tennessee legislature passed a law that will allow those with a carry permit to bring their weapons into restaurants that serve alcohol. Previously, this was against the law. There are some caveats though. The person carrying is not allowed to drink any alcohol at all (that is definitely reasonable). As always, signs posted by the restaurant owners can be used to prevent the legal carrying of weapons onto the property (those signs still won’t prevent illegal weapons though).

At the current point in time I’m glad that the law has been modified. I live in Bartlett, close to Memphis, and Memphis is a high crime area. I carry everywhere I go. I occasionally like to go out and eat with my wife and kids. Previously, I had to take some pretty big chances.

I could:

(1) Take my firearm into the restaurant and hope it wasn’t spotted. If it was, it could cost me a hefty fine or possibly the loss of my firearm and my permit.

(2) Leave my firearm in my car and pray that it was not broken into while I was inside

(3) Leave my firearm at home

(4) Skip eating out altogether

Option (1) kept my family and me safe for the entire dining experience. Options (1) and (3) kept the gun from being stolen. Option (2) kept us safe while we were in the car, but not going between the car and the restaurant (the most likely time to get robbed, etc.), and it also put the gun itself at risk.

With these things in mind I am very pleased with the updated law. I know I will eat out more knowing I won’t have to leave the firearm I almost always carry in my car. I don’t shop at stores where signs are posted saying firearms are not allowed, and I have no intention of eating at such an establishment either. I know others feel equally strongly in the opposite direction on this subject, and all I can say is that as a responsible carry permit holder I strongly believe I am in the right on this one.

I read this comment the other day and thought it was about as true as it gets:

“… and yes if you sat next to me and my kids in McDonalds in the playground area, you were within 10 feet of a gun and never knew it. truthfully that was as safe as you and your kids could be without a police officer at the next table over.”

May 30

Robots in War (2)

This is another article I read recently about robots in wars. It is more philosophical in nature, so keep that in mind as you read. Please consult the full text of the article for more. I really like the anecdotes about the Swiss army accidentally invading Liechtenstein and how a British Army platoon accidentally invaded a Spanish beach. Would these problems be eliminated with a robot army? See the article for an answer.

I will be trying to dig up the paper itself (Asaro’s “How Just Could a Robot War Be?”) and will also be doing some research into exactly what defines a “Just War”. I’ll post my findings at a later date.

I’ve included two long excerpts from the article below. Enjoy, and please consult the original article as it is quite well done.

In a fascinating paper entitled “How Just Could a Robot War Be?”, philosopher Peter Asaro of Rutgers University explores a number of robot war scenarios.

Asaro imagines a situation in which a nation is taken over by robots — a sort of revolution or civil war. Would a third party nation have a just cause for interceding to prevent this?

Asaro concludes that the use of autonomous technologies such as robot soldiers is neither “completely morally acceptable nor completely morally unacceptable” according to the just war theory formulated by Michael Walzer.

Just war theory defines the principles underlying most of the international laws regulating warfare, including the Geneva and Hague Conventions. Walzer’s classic book Just and Unjust Wars was a standard text at the West Point Military Academy for many years, although it was recently removed from the required reading list.

Asaro asserts that robotic technology, like all military force, could be just or unjust, depending on the situation.

h+: We’re using semi-autonomous robots now in Iraq and, of course, we’ve been using smart bombs for some time now. What is the tipping point – at what point does a war become a “robot war”?

PETER ASARO: There are many kinds of technologies being used already by the U.S. military, and I think it is quite easy to see the U.S. military as being a technological system. I wouldn’t call it robotic yet, though, as I think there is something important about having a “human-in-the-loop,” even if the military is trying to train soldiers to behave “robotically” and follow orders without question.

I think there is always a chance that a soldier will question a bad order, even if they are trained not to, and there is a lot of pressure on them to obey.

Ron Arkin is a roboticist at Georgia Tech who has designed an architecture for lethal robots that allows them to question their orders. He thinks we can actually make robots super-moral, and thereby reduce civilian casualties and war crimes.

I think Ron has made a good start on the kinds of technological design that might make this possible. The real technical and practical challenges are in properly identifying soldiers and civilians.

The criteria for doing this are obscure, and humans often make mistakes because information is ambiguous, incomplete, and uncertain. A robot and its computer might be able to do what is optimal in such a situation, but that might not be much better than what humans can do.

More importantly, human soldiers have the capacity to understand complex social situations, even if they often make mistakes because of a lack of cultural understanding.

I think we are a long way from achieving this with a computer, which at best will be using simplified models and making numerous potentially hazardous assumptions about the people they are deciding whether or not to kill.

Also, while it would surely be better if no soldiers were killed, having the technological ability to fight a war without casualties would certainly make it easier to wage unjust and imperial wars. This is not the only constraint, but it is probably the strongest one in domestic U.S. politics of the past 40 years or so.

By the way, I see robots primarily as a way to reduce the number of soldiers needed to fight a war. I don’t see them improving the capabilities of the military, but rather just automating them. The military holds an ideal vision of itself as operating like a well-oiled machine, so it seems that it can be rationalized and automated and roboticized. The reality is that the [human] military is a complex socio-technical system, and the social structure does a lot of hidden work in regulating the system and making it work well. Eliminating it altogether holds a lot of hidden dangers.

h+: You talk about the notion that robots could have moral agency – even superior moral agency – to human soldiers. What military would build such a soldier? Wouldn’t such a soldier be likely to start overruling the military commanders on policy decisions?

PA: I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.

I think we are now starting to see robots that are capable of taking morally significant actions, and we’re beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves… or for themselves.

They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant’s “categorical imperative” is the standard of rationality from which moral requirements are derived.]

We might be able to design robotic soldiers that could be more ethical than human soldiers.

Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.

Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron also designed architecture for robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.

It does seem like a hard sell to convince the military to build robots that might disobey orders. But they actually do tell soldiers to disobey illegal orders. The problem is that there are usually strong social and psychological pressures on soldiers to obey their commanders, so they usually carry them out anyway. The laws of war generally only hold commanders responsible for war crimes for this reason. For a killing in war to truly be just, then the one doing the killing must actually be on the just side in the war. In other words, the combatants do not have equal liability to be killed in war. For a robot to be really sure that any act of killing is just, it would first have to be sure that it was fighting for a just cause. It would have to question the nature of the war it is fighting in and it would need to understand international politics and so forth.

The robots would need to be more knowledgeable than most of the high school graduates who currently get recruited into the military. As long as the war is just and the orders are legal, then the robot would obey, otherwise it wouldn’t. I don’t think we are likely to see this capability in robots any time soon.

I do think that human soldiers are very concerned about morality and ethics, as they bear most of the moral burdens of war. They are worried about the public reaction as well, and want to be sure that there are systems in place to prevent tragic events that will outrage the public. It’s not impossible to try to control robot soldiers in this way. What we need is both the political will, and the technological design innovation to come together and shape a new set of international arms control agreements that ensures that all lethal robots will be required to have these types of ethical control systems.

Of course, there are also issues of proliferation, verification and enforcement for any such arms control strategy. There is also the problem of generating the political will for these controls. I think that robotic armies probably have the potential to change the geo-political balance of power in ways far more dramatic than nuclear arms.

We will have to come up with some very innovative strategies to contain and control them. I believe that it is very important that we are not naive about what the implications of developing robotic soldiers will mean for civil society.

May 30

Robots in War (1)

The idea of using robots in a war-type environment has always been there. At least it has always been there for those in the “Terminator generation”. However, those in the Star Wars generation would have seen C-3PO or R2-D2 blasting away with lasers. Before that I’m sure there were other examples. So suffice it to say robots in wars is not a new concept. It is, however, reality. Check out this article for some of the details. It’s a pretty cool read. Some excerpts are below.

This first excerpt covers an actual malfunction of such technology.

A few minutes before nine in the morning, and the young soldiers have no idea of the horror that is about to strike them. They are taking part in a massive military training exercise, involving 5,000 troops, and are about to showcase the latest in robotic weapons technology.

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

But while it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds.

As the display begins, the South African troops sense quickly that something is terribly wrong. The system appears to jam – but what happens next is truly chilling.

‘There was nowhere to hide,’ one witness stated in a report. ‘The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.’

One young female officer rushes forward to try to shut down the robotic gun – but it is too late.

‘She couldn’t, because the computer gremlin had taken over,’ the witness later said.

The rounds from the automated gun rip into her and she collapses to the ground. By the time the robot has emptied its magazine, nine soldiers lie dead (including the woman officer).

Another 14 are seriously injured. The report will later blame the bloodbath on a ‘software glitch’.

It sounds like a blood-spattered scene from the new blockbuster Terminator Salvation, in which a military computer takes over the world using an army of robot soldiers.

But this bloodbath actually happened. And concern is mounting that it may happen again and again, as a growing number of military robots flood the battlefield.

And this one talks about the various sizes and potential issues that could happen.

‘Just look at the numbers,’ he says. ‘We went into Iraq in 2003 with zero robots. Now we have 12,000 on the ground. They come in all shapes and sizes, from tiny machines to robots bigger than an 18-wheeler truck.

There are ones that fit on my little finger and ones with the wingspan of a football field.’

The U.S. military is the biggest investor in robot soldiers. Its robot programme, dubbed Future Combat Systems, is budgeted to spend $240 billion over the next 20 years.

But Singer is worried that in the rush to bring out ever more advanced systems, many lethal robots will be rolled out before they are ready.

It is a terrifying prospect. ‘Imagine a laptop armed with an M16 machine-gun,’ one expert said.

According to Noel Sharkey, a professor of robotics and artificial intelligence at Sheffield University, one of the biggest concerns is that this growing army of robots could stray out of communication range.

‘Just imagine a rogue robot roaming off the battlefield and into a nearby village,’ he says. ‘Without experts to shut it down, the results could be catastrophic.’

There are robots that can move through sand and water. There are robots that can hover, and robots that can fly. Humanoid robots. There are robots that can utilize a machine gun with the accuracy of a sniper shooting an apple from hundreds of meters. These robots can be armed with grenade launchers, machine guns, and rocket launchers. They’re not so smart, but they are good at what they are told to do.

Of course, as with any weapon technology there is fear it could fall into the wrong hands (assuming it started in the right hands to begin with). There is also fear of the robots making mistakes. For example, one might misidentify something as a threat. Can robots be made to understand the rules of engagement? These are questions that have to be dealt with and their consequences understood. But make no mistake, we, the world, have proceeded down this path. It is happening, and hopefully we can keep it under control or at least stay ahead of the curve.

I will leave you with this final thought from the article.

‘Body bags containing real soldiers coming home affect the government electorally,’ says Sharkey. ‘Once you start using robots, you remove this problem.’

But do we really want going to war to be as easy, and impersonal, as playing a computer game?

Mar 02

Pirate Bay case update and some related legal questions

I have been interested in the Pirate Bay trial that has been going on, but it has now taken on a whole new level of interest to me because they are releasing the actual arguments being utilized on each side. I was reading this article about the trial and read the following, which brought some questions to mind:

Roswall dropped several charges on the second day of the trial for the purpose of streamlining the case, Ars was told, which leaves contributory copyright infringement as the main charge. The Pirate Bay might not host content itself, but if its main use is as a middleman that arranges illegal peer-to-peer transfers, Roswall said that the site could be held responsible.

“A person who is holding someone’s coat while they assault someone else is complicit in the crime,” he said, according to Swedish paper The Local.

And Monique Wadsted, the lawyer for the movie industry, told the court that it was a basic point of Swedish law that one can’t just walk around with eyes closed when one knows that crimes are being committed.

Wadsted also claimed that The Pirate Bay was built for piracy, and she noted that site admins do in fact police the site for child pornography, inactive torrents, and misleading descriptions. Given that sort of control over the material, is it credible simply to see The Pirate Bay as a hand-off forum that allows all sorts of user postings for which it cannot be held liable?

The defense is continuing to claim that the European Union e-commerce directive passed in 2000 protects them from liability. The relevant part of the directive is Article 12, the “mere conduit” section, which says that a “service provider” is not liable for the information transmitted by its users.

The rule applies only to “service providers,” raising the question of whether The Pirate Bay qualifies, and it only applies when three conditions are met: the service provider must not 1) initiate the transfer, 2) select the receiver of the transfer, 3) modify the transfer in any way.

So what is it that I find interesting? All of it actually. What if someone wanted to start a service that helped drug dealers (not big pharma… the ones that are currently illegal) hook up with those that wanted to buy drugs? If they simply created a website that facilitated the two hooking up and took NO PROFITS from either party, would the website be breaking the law? Would I be breaking the law if I DID get paid by advertisers? What if I took a cut of the transaction itself? For the last one I think yes, and for the other two I lean towards “no”, but I’m not sure. IANAL – so what do I know? What if I didn’t know about the drug dealers? What if they were just using the site to exchange illegal things and I didn’t know?

In the article the prosecutor claims “if [a website’s] main use is as a middleman that arranges illegal peer-to-peer transfers” then it can be held liable for damages. I assume this <illegal peer-to-peer transfers> could be substituted with anything that is <illegal>. Fine, but what if its intended main use is as a chat room, and it just happens to provide a mechanism for pushers and buyers to find each other?

It seems kind of arbitrary for anyone other than the creator to define something’s “main use”. That’s like saying a car’s “main use” is to run over pedestrians just because it happens sometimes. Even if it happens a lot, that is not its “main use”. It doesn’t really follow that I, the inventor or provider, can invent or provide a service for one use and that someone else can call what I did illegal because some portion of society chooses to use it in some other way than what I intended. My understanding is that it is in general EXTREMELY difficult to prove “motive” or “intent”.

Is the service a “mere conduit” as defined? I don’t know. I’ve never used it. I make enough money to buy most of my own crap now, and I rarely listen to new music. I do hate it for The Pirate Bay that they apparently did remove some material. At that point they might actually have changed their status from “mere conduit” to “data managers” or something like that.

Even for those not in Sweden this could set some huge precedent. I will have to check American law for further clarification, but to be on the safe side I would say anyone wanting to create a website that allows people to get what they want (files, information, dates [think “dating”], whatever) should at least make sure they meet the definition of “service provider”. The three requirements were (as listed in the article) that a service provider must:

  1. NOT initiate the transfer
  2. NOT select the receiver of the transfer
  3. NOT modify the transfer in any way
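Since all three conditions have to hold at once, the test above is really just a boolean AND of three negations. Here is a toy sketch of that logic in Python – the function and field names are my own invention for illustration, and this is obviously not legal advice or how a court actually weighs anything:

```python
# Toy illustration of the Article 12 "mere conduit" test.
# The names here are made up for this sketch; a real legal analysis
# involves far more than three booleans.

def qualifies_as_mere_conduit(initiated_transfer: bool,
                              selected_receiver: bool,
                              modified_transfer: bool) -> bool:
    """A provider qualifies only if it did NONE of the three things."""
    return not (initiated_transfer or selected_receiver or modified_transfer)

# A pure middleman that never initiates, targets, or touches the data:
print(qualifies_as_mere_conduit(False, False, False))  # True

# A site that deletes (i.e., arguably "modifies") user postings:
print(qualifies_as_mere_conduit(False, False, True))   # False
```

Note how unforgiving the rule is under this reading: tripping any single condition, even once, flips the result.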

Does (1) mean you cannot be the sender? Or does (1) mean you cannot send to a receiver that has not solicited it? If it is the former, then it seems that (3) means you cannot delete something posted by another user, because the sender just uses your platform. If someone else initiates the transfer (i.e., a user) and you delete it, that would be viewed as a modification. If that is true, then it would be in a provider’s best interest to ignore any cease-and-desist letters, subpoenas, whatever, for fear of violating their status as a provider and thus opening themselves up to even more litigation. This makes it crazy hard to figure out what to do.

Also, the rules do not say that a provider cannot profit. So going back to my drug dealer example, it would seem a “service provider” would be able to profit as long as they didn’t skim any of the “product” or any of the money from the buyer. I am guessing that an additional part of being legally NOT LIABLE as a service provider would require that the item being pushed through the “conduit” is not illegal. Thus, actually setting up a meeting between a user and a pusher would likely be illegal. (What if the site just said “I can recommend a guy” and let them work it out from there? I don’t know.)

This is why “data law” is so much harder (and more interesting) than other types of legal issues. It’s a relatively young area with a lot of gray area. Add to that the fact that the platforms on which it is practiced are always changing, and it makes for some very interesting and provocative conversation opportunities.