Wednesday, September 21, 2016

Do Self-Driving Cars Threaten 4.1M Jobs?

Some have postulated that self-driving cars threaten 4.1M jobs. The argument is that 4.1M bus drivers, taxi drivers, truck drivers, and other people in the transportation business will lose their jobs and presumably not be able to find another line of work.
Personally, I tend to be somewhat skeptical of this type of headline. I do believe that the market of the future will demand a more technically literate workforce - even in the C-suite. However, I do tend to agree that The Jobless Future Is A Myth.
4.1M driving jobs may disappear, but when the price of delivering items drops dramatically, how many specialized opportunities to deliver food and other niche items will emerge to replace the jobs lost? I think of it like Walmart allegedly killing "mom-and-pop" stores but then helping clear the way for Etsy before starting to feel the pressure from Amazon. I would argue that the work the mom-and-pop stores did is still happening, just in different places. It seems to me more like the market getting more efficient over time rather than impending doom on the horizon.
Machines can do a lot of things, but I'm not convinced that permanently putting people out of work is one of them.

Monday, August 29, 2016

Uber vs. Ralph Waldo Emerson

There's an old saying "Build a better mousetrap, and the world will beat a path to your door." Whether it's a direct quote or a paraphrase of Ralph Waldo Emerson is immaterial - I'm convinced that the quote is pure B.S.

Exhibit A: Uber is starting to use self-driving cars.


For the sake of completeness, I'll say that there probably isn't enough data yet to answer definitively whether self-driving cars are safer than their human-driven counterparts. Still, I personally feel like one could make a strong anecdotal case that there is a lower rate of fatalities among self-driven cars.

I personally think Uber's approach to the problem is brilliant. Instead of spending buckets of cash on advertising and putting the statistics out there, they're spending it to give free rides using the technology. Uber customers get to opt-in to the service, the rides are free, and they come with a driver who is only there to override the car in the event of an emergency. 

This has several key benefits:
  1. It helps Uber to improve their autonomous driving technology by testing it and gathering more training data. This is something that all players in this space will need a lot of, and Uber stands to gain a huge lead over their competition. 
  2. It replaces fear of the technology with concrete experience - probably among tech savvy early adopters who are the most likely to talk to their friends about it. 
  3. It validates the market in a way that nobody has accomplished yet. 
I applaud Uber for their initiative. Whether the technology is safer or not, if they waited for people to beat a path to their door, they'd likely be waiting for a while.

Thursday, August 4, 2016

The "Unmanned" Navy Submarine Program

The United States Navy has an unmanned submarine nearing deployment. This is a great example of how AI can augment but not completely replace a human workforce.
These submarines are designed to be low-cost surveillance/anti-sub systems. They can travel thousands of miles and stay at sea for months. Unlike their human-operated counterparts, they don't have to come back for food every few months. Theoretically, if enough of them were deployed and they didn't need to move around much, one of these subs could stay out at sea for years without refueling.
Then there's the cost factor. The goal is to produce the new subs for $20M each. That's a good amount of money, but it's roughly 1% of the $1.7B apiece that the Navy agreed to pay for its next 10 manned nuclear submarines.
For the cost of a single Virginia-class submarine, the Navy could field an entire fleet of roughly 85 unmanned subs monitoring activity across an entire ocean theater.
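The back-of-the-envelope arithmetic behind that fleet-size claim, using the $20M and $1.7B figures cited above, is simple enough to sketch:

```python
# Back-of-the-envelope comparison using the figures cited above.
manned_sub_cost = 1.7e9    # ~$1.7B per manned Virginia-class submarine
unmanned_sub_cost = 20e6   # ~$20M target cost per unmanned sub

fleet_size = manned_sub_cost / unmanned_sub_cost
cost_share = unmanned_sub_cost / manned_sub_cost

print(f"Unmanned subs per manned-sub budget: {fleet_size:.0f}")  # 85
print(f"Unmanned cost as a share of manned: {cost_share:.1%}")   # 1.2%
```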
On the other hand, while mostly autonomous, the fleet still needs human operators to make deployment decisions and process the intelligence provided. That need isn't likely to change. Note that the new subs also are not part of the nuclear deterrent. 
While the U.S. Navy is leveraging automation in a big way, they're still going to need human staff. However, the Navy of the future may include fewer sailors and more intelligence analysts. 

Tuesday, August 2, 2016

Robots Can... Sing And Dance?

Surely, the arts are safe and will never be automated. A machine could never possibly have a soul or feelings and could never possibly relate to art the way that a human can. Therefore, it must obviously follow that a machine could never create art the way that humans can.
Well... That may be premature. In China, a team of 1,000+ synchronized dancing robots just broke the world record. This is not the first attempt at robotic dancing.
Similarly, AI algorithms are becoming increasingly adept at composing music.
Masters of human dance/music usually spend a lifetime studying, perfecting, improvising, and re-combining a relatively small number of basic movements in new and creative ways. On the other hand, AI can (with enough $$, development expertise, and computing power) observe the last 30 years of music/dance in mere moments, figure out which songs/dances were the most effective/successful/lucrative for a given purpose (usually topping the charts/selling tickets), and combine the different movements into a unique number/composition.
This is not to say that machines will completely replace human musicians/dancers. A century after the invention of the automobile there are still horses selling for millions of dollars. On the other hand, in modern society motor vehicle theft anecdotally seems to be more common than horse theft.

Monday, July 11, 2016

Robot Ends Terrorist Threat With A Bomb

President Obama has officially made history for the number of days an American President has ordered flags to be flown at half staff. Sadly, as of late it seems that many human beings have given in to utter cowardice as they chose to visit hate, murder, and evil on society at large, and my sincere condolences go out to those affected by these incidents. 
Terrorism isn’t new, but the way that police responded to the latest threat may be. This time, while the terrorist was busy being a terrorist, the Dallas Chief of Police David Brown decided to use a robot to avoid risking further loss of life. The Dallas Police strapped a pound of C4 to a robot, guided the robot toward the terrorist, and detonated the bomb to end the threat.
The implications here are far reaching. In addition to bombs, similar drones are capable of carrying tasers, water cannons, and possibly guns. What’s more, even the most limited-government-minded of federal politicians have discussed using drones to drop bombs domestically to accomplish similar things.
While I offer no opinion on the use of such a robot in this case, I will say that if I were given the task of stopping the shooter myself, I could see how the idea of sending a robot instead would seem very appealing. I will also say that now seems like a great time to have a civil debate on the limits of when robots should be used in such situations.
The robots in this case were human controlled every step of the way, and that brings with it a certain level of ethical debate. If AI/The Singularity/Skynet is even possible, I still hold that it's not imminent. A more likely scenario for the next 5-10 years seems to be one where the robots of tomorrow could be a blend of AI and human control.
In other words, what if a robot were sent into a room with multiple terrorists holding hostages, and police were tempted to hand off part of the targeting process to the machine? That scenario may not be as far-fetched as it sounds, and it would bring with it an entirely different level of ethical debate.

Wednesday, June 29, 2016

AI finally catches up to… cows?

Have you heard the news? Super-intelligent machines are plotting to take all of our jobs during breakfast before deciding over their lunchtime soup and salad whether or not they want to kill us all!
What’s that you say? You want proof - or at least a tiny shred of evidence? Well look no further than this: A “learning robot” managed to escape from its fenced-in training area. Surely, this is nothing less than Skynet in the making!
Well. Maybe not quite… You see, the engineer kinda forgot to close (much less lock) the gate. The robot, like any decent overgrown Roomba vacuum cleaner, eventually found its way out of its pen and went cavorting around the town... until its battery died while it was in the middle of a nearby street. And then to prove that it wasn’t a fluke, the robot escaped... AGAIN! (No word on whether or not the gate was closed the second time.)
I confess that I’m being a bit facetious here. I will say that I do believe that technology/AI/automation is reaching a level of sophistication that will eliminate a lot of jobs. I could even see a very real potential for not-quite-smart-enough AI in charge of cars or anti-aircraft lasers causing a real and very serious danger to people. Even more so, it appears that AI may take over defending our skies in the not-too-distant future. 
On the other hand, before I’ll believe that Skynet is coming for me, I’d like to see evidence that robots are doing more than escaping from their pens the way cows, sheep, and dogs have been giving their owners the slip and even stopping traffic for millennia. There’s at least one 2,000-year-old parable in The Bible of sheep performing similar feats, and I imagine there are other similar stories in ancient literature/historical accounts.
I’m ready to say that there’s plenty of prior art to go around on this one. 

Monday, June 27, 2016

AI Flight Simulator Defeats 'Top Gun’ Pilots

In a triumph of AI, scientists have built a flight simulator that can consistently beat the best human pilots - even when the humans are given superior [virtual] aircraft. What’s more, the AI used only a $500 low-end consumer grade computer. 
This is a significant advance in technology. It’s sure to be replicated by military powers all over the world, and I’ll go out on a limb and say that it greatly increases the odds that one day human fighter pilots will be mostly [if not completely] replaced by drones. 
How is this done? Well, first let’s talk about how modern “dogfights" work. Basically, they don’t - and they really haven’t since Vietnam. Aircraft combat today happens at ranges of up to 100 miles and is called “Beyond Visual Range” (BVR) combat. In fact, United States General Norman Schwarzkopf was quoted as saying “During the first three days of the war, when control of the air was greatly contested, what it basically amounted to was the Iraqi aircraft would take off, pull up their landing gear, and blow up.”
That said, air combat is something that can essentially be reduced to a math problem. There are only so many possible changes in speed and direction that a pilot can make. In other words, you want to get within range, fire a missile, and get out in a way that makes the other guy go boom without your plane going boom. Computers are better at that kind of math than people are.
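To make the "math problem" framing concrete, here's a deliberately tiny sketch in Python. All of the numbers and the decision rule are invented for illustration; real BVR engagements involve far more variables. The core of it is just "fire when the target is inside missile range, otherwise keep closing":

```python
import math

# Toy illustration of BVR combat as geometry (invented numbers).
MISSILE_RANGE_MILES = 100

def distance(a, b):
    # Straight-line distance between two (x, y) positions in miles.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decide(own_pos, target_pos):
    # Fire once the target is inside missile range; otherwise keep closing.
    if distance(own_pos, target_pos) <= MISSILE_RANGE_MILES:
        return "fire and egress"
    return "close the distance"

print(decide((0, 0), (60, 80)))   # exactly 100 miles away
print(decide((0, 0), (200, 0)))   # well outside range
```

The point isn't that this is how a real system works - it's that the decision is pure arithmetic, and arithmetic is the one thing computers never get tired of.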
I would say that this does raise a very legitimate moral/legal/ethical debate about whether/how to let machines make kill decisions - and despite my personal preferences, I could see legitimate arguments on both sides of the issue. I wouldn’t trust this machine to make any sort of decision on who to go to war with, and I don’t think it means that Skynet or the Singularity is imminent. 

Tuesday, June 14, 2016

Umpires, Farmers, and Ranchers

Question: What do a baseball umpire, a farmer, and a rancher all have in common?
Answer: all three have repetitive jobs that can be automated and done by machines that don’t get tired and don’t care much about work-life balance.
Complaining/protesting, trying to hold back the technology, and/or demanding an increase in the minimum wage don’t seem like solutions that scale very well.
On the other hand, learning enough about technology, programming, and/or AI so that you can solve problems yourself does seem like it would scale... Unless of course you agree with Charles H. Duell, who allegedly claimed (in 1899) that "Everything that can be invented has been invented."

Wednesday, May 11, 2016

Is Artificial Intelligence Racist?

20 years ago, Microsoft launched Clippy, the AI office assistant that became a lightning rod of scorn for the company. Not resting on their laurels, 20 years later, Microsoft decided to demonstrate its AI prowess, this time by releasing a “chatbot” capable of carrying on conversations.
Unfortunately, the chatbot began making racially inflammatory comments - to the point that Microsoft found itself in the somewhat familiar place of shutting an AI project down and apologizing for it. 
But what went wrong? Is AI really racist? 
The answer is “It depends.” Think of AI as similar to the immature child prodigy whom you know could easily pick the locks on your house or reprogram your car’s on-board computer. You’re somewhat nervous about the kid getting in with the wrong crowd and using those abilities to do bad things before developing some sort of moral restraint.
Put another way, AI depends on data. To train a chatbot to have conversations, one very logical way would be to build a massive database of conversations, compile a bunch of questions/statements, and then catalog the most common responses. Therefore, if the chatbot reads a lot of Arnold Schwarzenegger movie scripts, then it will likely be pre-disposed to violent tendencies and telling people “I’ll be back.” If the database of conversations tends to have a lot of dialog that could be considered racist, then the chatbot’s responses will be racist as well.
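A minimal sketch of that "catalog the most common responses" idea shows how skew in the training data flows straight into the bot's behavior. The conversation pairs below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Invented training data: prompt/reply pairs, deliberately skewed.
training_pairs = [
    ("how are you", "I'll be back"),
    ("how are you", "fine, thanks"),
    ("how are you", "I'll be back"),   # the skew in the data...
    ("hello", "hi there"),
]

# Catalog how often each reply follows each prompt.
responses = defaultdict(Counter)
for prompt, reply in training_pairs:
    responses[prompt][reply] += 1

def chatbot(prompt):
    # Reply with the single most common response seen for this prompt.
    return responses[prompt].most_common(1)[0][0]

print(chatbot("how are you"))  # ...wins: "I'll be back"
print(chatbot("hello"))        # "hi there"
```

Swap the Schwarzenegger scripts for racist forum threads and the same mechanism produces a racist bot - no malice required, just biased data.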
Along similar lines, many large data sets depend on smartphones or devices like the Apple Watch. It’s not hard to imagine that such devices are more likely to be owned by wealthy people. Thus data sets built using such technology could skew toward the wealthy, and AI built on top of such data sets could in turn “discriminate" against the less fortunate in a society.
So is AI racist? Yes, no, and maybe so.

Monday, May 9, 2016

AI "Accurately" Predicts Your Death

If you’re looking for an incredible example of journalists over-hyping the capabilities of AI (sometimes referred to by its synonym “Big Data”), you need look no further than this article talking about AI predicting death “Accurately.”
You can rest assured that a bunch of geeks in a lab somewhere haven’t used a computer to crack the code of time and space. And no, it can’t tell you with certainty whether or not you’re going to die next Tuesday at 3:52pm local time. Dissecting this one takes some critical thinking and it helps if you have some experience with predictive analytics. 
First of all, the writer and editor are having a bit of fun here being somewhat loose with the definition of the word “accurately.” Webster defines accurate as “Free from error or defect; Consistent with a standard, rule, or model; precise; exact.” The definition is vague because the same word can be used to mean either “perfect” or “right more than half the time.” When you deal with predictive analytics on a large scale, the results are almost never perfect. Hence, the article may be guilty of using the word accurate to mean “right more than half the time” while gleaning disproportionate amounts of attention and web traffic from people who read accurate to mean nearly “perfect.”
A second convenient detail about the article is the nature of the prediction. Given enough time, there's a 100% probability that everyone alive today will die. So if I build a model to predict death, I can cheat and always predict “Yes, you will die” and I’ll be right… eventually.
Once the hype is peeled back a bit, you can look at what the article is really saying and make a strong case that this is really nothing new. Actuaries (a.k.a. “Data Scientists”) have known for decades (centuries?) that while you can’t predict specifically what will happen to any one person, you can make reasonable assumptions for a group of people. It’s effectively an example of the Central Limit Theorem, one of the core principles underpinning modern statistics. In fact, the math has long been accurate enough to underpin some very valuable companies that sell life insurance.
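A quick simulation illustrates the actuarial point (the 1% mortality rate is a made-up number for the sketch): any single outcome is essentially a coin flip, but the rate across a large group is very predictable.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Hypothetical one-year mortality rate for the group.
p_death = 0.01

def observed_rate(group_size):
    # Simulate each person independently and return the group's death rate.
    deaths = sum(random.random() < p_death for _ in range(group_size))
    return deaths / group_size

print(observed_rate(10))         # tiny group: the rate bounces around wildly
print(observed_rate(1_000_000))  # large group: the rate lands very near 0.01
```

That stability at scale is what lets an insurer price a policy without knowing anything certain about you in particular.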
Mark Twain popularized the saying that "there are three kinds of lies: lies, damn lies, and statistics.” This article feels like yet another example of how slight tweaking of definitions and statistical models can really go a long way toward distorting reality.

Wednesday, May 4, 2016

When Machines Fail

For all the promise of technology making our lives easier and threats of machines taking away our jobs, machines aren’t perfect at performing either function. One thing people are good at that machines… aren’t good at… is knowing what to do when something goes wrong.
Whatever the application, whether a function is performed by a machine or a human, something will go wrong. Period. Full Stop.
The next question is who/what will be there to pick up the pieces in a way acceptable to all parties involved. To my knowledge, nobody’s even tried to automate that one yet.

Monday, May 2, 2016

What Machines Are Good At

I had a college professor who frequently quipped that "a computer is a very fast moron.” I don’t think he originated the quote, but I do think it is very instructive.
When you hear computer CPUs described as "1.8 GHz,” what that really means is that the computer is (theoretically) capable of doing a math problem 1.8 billion times every second. A dual-core processor can do 3.6 billion math problems in a second, and a quad-core processor can do 7.2 billion math problems in a second. Suffice to say that computers do math a lot faster than people do.
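The arithmetic, spelled out (treating one clock cycle as one "math problem," which is a simplification - real CPUs can do more or less than one operation per cycle):

```python
# Clock-speed arithmetic: scaling one core's cycle rate by the core count.
GHZ = 1e9  # 1 GHz = one billion cycles per second

single_core = 1.8 * GHZ      # 1.8 billion cycles/sec
dual_core = single_core * 2  # 3.6 billion
quad_core = single_core * 4  # 7.2 billion

print(f"{quad_core:.1e} cycles per second")  # 7.2e+09
```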
When you think about it, a lot of things in life can be modeled as a math problem. The morning drive to the office, a game of chess, the trajectory of a missile, preparing dinner, and even large parts of human language can be modeled reasonably accurately as math problems. Given enough time and money spent on defining the rules of a problem, many things can very consistently be done better by a computer than a human.
Another advantage that computers have is that they are amazing at learning in parallel. If a human wants to learn something, he/she has the fundamental limitation of only having 24 hours in a day minus eating and sleeping. A machine on the other hand can divide and conquer. If a team decides one day that they want to do something crazy like organize the world’s information and make the best of human knowledge available to anyone instantaneously, they can spin up a few million computers and digest all of the world’s information a few times a day to account for the continually growing/changing nature of the global conversation.
In short, computers are very good at an ever-increasing number of tasks that can be broken down either into data sets of things/events that have come before or a set of pre-defined rules. That said, computers haven’t learned how to define the rules of the game yet.

Wednesday, April 27, 2016

McDonalds, Automation, and the $15 Minimum Wage

It's time for a thought experiment. I’ll pick on McDonalds because they’re big and they happen to be in the headlines at the moment. Those feeling adventurous enough could continue the same logic to nearly any restaurant franchise. 
Assume for a moment, you’ve decided that you want to own your own set of golden arches, and you decide to open up your own McDonalds Franchise. For the sake of round numbers, we'll say that’ll cost you a cool $1M just to open the doors. Most people don't have a spare $1M sitting in their couch, so chances are that you borrowed some/most/all of that... from bankers... with lawyers... Said bankers will eventually want their money back, and there's a chance that they may hire someone to get cranky on their behalf if you don't keep up your end of the deal.
It’s a very expensive operation to be sure, and many franchises fail. Add to that the knowledge that, on the whole, McDonalds has struggled at times. However, after McDonalds changed CEOs, they tried changing a few things, and it looks like they’re heading back into growth territory.
Now the middle class in the United States is being squeezed. You know that some portion of the middle class is currently running your $1M restaurant investment. Your employees' cost of living is going up, while their wages are stagnant or dropping. Looking for a solution to the problem, many in the middle class begin protesting for a higher minimum wage. While you don’t mind the idea of people getting ahead and making progress in life, the issue is that your customers may very well leave if you raise your prices, and your profit margin may disappear if you raise wages that much. That could leave you broke and laying off all of your employees, with a lot of employees/bankers/lawyers angry with you.
Looking for a solution, you hear about this machine that can make more than 300 burgers an hour and do it better than people can. What would you do?
Extra credit for those with strong stomachs: Redo the experiment as the CEO trying to keep 35,000 of the aforementioned franchise owners in business while trying to keep shareholders (similar to the bankers with lawyers) happy. You know very well that you're only one bad earnings report away from being the next former CEO of McDonalds.

Monday, April 25, 2016

What People Are Good At

Machines are becoming proficient at an increasing number of tasks that were previously the domain of humans.
From cooking to running to playing chess to driving, machines are doing so much these days that I often ask myself "what are people good at that machines aren’t?"
Machines are very good at following rules. As of yet, they aren’t nearly as proficient at creating the rules. 
Given enough investment and effort, many aspects of everyday business will likely be automated, beginning with the simplest, most time consuming parts of the organization. This effect could potentially cause great economic disruption.
Despite great strides in voice and image recognition, humans are still superior at more abstract pattern recognition. Said another way, I’ve yet to see an AI capability that could be construed as seeing a real-world unsolved need and assembling/creating tools to fill that need. By that logic, CEO and other senior leadership roles in companies would theoretically be more difficult to automate than the roles of the people who report to them. Along those lines, programming is a method of creating and expressing rules, and to that end, people with programming experience (leaders or otherwise) could theoretically also make themselves more resilient to automation.
The ability to be flexible and continually learn new skill sets seems like it could prove very valuable in a world where things are continually being automated.

Monday, April 11, 2016

Donald Trump and Bernie Sanders May Have This in Common

By nearly all accounts, the 2016 American election process has been a "black swan event" in that it hasn’t included a lot of the patterns typically displayed in the process of picking an American President.
Baffling the pundits and oddsmakers, Donald Trump and Bernie Sanders both defied the leaders of their newly adopted political parties. They’ve both assembled a following around the shared idea of protectionism. Donald Trump makes it a point to say that we’re going to build a wall to keep Mexicans on their side of the border and then follows up saying that we’re going to beat China at the trade game. For his part, Bernie Sanders picks a slightly different target. Bernie instead chooses to draw the lines of protectionism a bit closer to home, as he drops accusations of greed and blames our neighbors and friends who happen to be CEOs and executives for the plight of the middle class and working poor.
The frustration that Mr. Trump and Mr. Sanders are both channeling seems to be rooted in some degree of economic hardship. What if the future of that hardship doesn’t originate in Mexico, China, or even in corporate board rooms? What if at least some of the frustration is due to jobs being automated on our own soil by technology implemented by companies simply attempting to stay competitive in the marketplace?
Could it be that Donald Trump and Bernie Sanders are each giving different names to the same boogieman: Computer Software?

Friday, April 1, 2016

Ultra-Intelligent Machines May Not Be The Immediate Concern

For the better part of a century, science fiction (I, Robot, Terminator, The Matrix, etc.) has related stories about ultra-intelligent machines that become smarter than people and do some combination of trying to kill everyone, taking over, and/or treating people as pets.
I won't say that a scenario like that will never happen. Repeated experiments have pushed my personal perception of what machines can do. However, given how often Siri screws up my questions, I will say that I think there are other concerns that may be more likely to happen first.
A bunch of smart people wrote an open letter to warn about the dangers of ultra-intelligent AI, although if you read the article closely, Bill Gates commented that he thought we were “a few decades” from ultra-intelligent machines. 
On the other hand, militaries by definition are constantly trying to one-up their adversaries. As weapon systems continue to advance, they increasingly become complex enough that they require some level of artificial intelligence to control them. 
With the military more or less attempting to keep up with the Joneses, it will be increasingly difficult not to deploy these weapons systems. Indeed, if my city were faced with a credible threat, I’m not sure I’d argue against deploying AI-controlled anti-projectile lasers to maintain safety.
To date, the lack of a major news story on the topic makes it seem like these machines have all been rigorously tested. However, the more these systems are deployed, the more it increases the chances of an accident. Think of what would happen if every time Siri messed up, a passenger jet fell out of the sky.
In the near-term at least, it seems to me that under-intelligent machines would be more likely to cause problems than ultra-intelligent machines. 

Tuesday, March 29, 2016

Robot Makes First Grocery Delivery

As part of a previous blog post, the question was raised of when robots would begin delivering groceries and turning those into meals. Well, it didn’t take long for the first robot deliveries to happen.
First, an airborne drone has made the first known home-delivery in a remote American town in the state of Nevada. Now, the founders of Skype have built a prototype robot that delivers up to 2 bags of groceries in the UK. 
The typical pattern for new technologies is that they start small and then improve in proportion to the number of people paying to use the good or service. If and when this starts to scale, Wal-Mart and the other big grocery chains will have no choice but to take notice of the little upstarts and try to adjust to compete. One can't help but wonder what that would mean for Wal-Mart's 2.2M total employees. Workforce adjustments could be necessary to keep as many of those 2.2M people working as possible. (I can't speak for everybody, but I personally don't fault CEOs for wanting to keep their own jobs too.)
I don’t know an exact timeline of when trips to the grocery store will end. If the technology does prove out, I’d guess it’d take a decade or more for any transition to happen completely. Still, if I were depending on a grocery store for my livelihood, I’d be nervous enough to start looking for a new career now rather than waiting and betting against the many, many startups fighting tooth and nail to be the first ones to get this right.
If it were me, I’d start looking at programming courses on Coursera or Udacity to begin thinking about a career path that would be harder to replace.