This apparently didn't satisfy the woman whose conversation was recorded, according to the Mercury News:
Now her family has unplugged all the devices, and although Amazon offered to "de-provision" the devices of their communications features so they could keep using them to control their home, Danielle and her family reportedly want a refund instead.
When reached Friday, an Amazon spokeswoman would not comment about whether the company will issue a refund.
Other smart home speakers carry similar privacy risks. Last year, for example, Google had to release a patch for its Home Mini speakers after some of them were found to be recording everything.
The 'see-and-spray' robot goes from plant to plant, visually distinguishing the actual crops from the weeds and squirting the weeds selectively and precisely with weed killer -- as opposed to the current technique of spraying entire fields with large quantities of herbicide like Monsanto's Roundup.
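In outline (though not in the computer vision that makes it work), the robot's control loop is just classify-then-actuate. Here's a rough Python sketch of the idea -- every name in it is hypothetical, not any company's actual code:

```python
# Illustrative see-and-spray loop. All names here are hypothetical;
# the real work is in the trained vision model behind classify().
from dataclasses import dataclass

@dataclass
class Plant:
    image: bytes        # camera frame cropped to a single plant
    offset_cm: float    # position along the sprayer boom

def classify(image: bytes) -> str:
    """Stand-in for a trained image classifier (e.g. a CNN)
    that labels a plant as 'crop' or 'weed'."""
    return "weed"  # placeholder

def spray(offset_cm: float) -> None:
    """Stand-in for firing the nozzle nearest the target."""
    print(f"targeted dose at {offset_cm:.0f} cm")

def process_row(plants: list[Plant]) -> None:
    for plant in plants:
        if classify(plant.image) == "weed":
            spray(plant.offset_cm)   # herbicide only hits the weed
        # crops pass untouched -- the opposite of broadcast spraying
```

The payoff described below falls out of that single `if`: the dose can be more potent precisely because almost none of it lands on the crop.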
Weeds are already becoming resistant to such glyphosate-based herbicides after "more than 20 years of near-ubiquitous use," reports Reuters. (The head of one pesticide company's science division concedes, "That was probably a once-in-a-lifetime product.") But AI-based precision spraying "could mean established herbicides whose effect has worn off on some weeds could be used successfully in more potent, targeted doses."
Meanwhile, another Silicon Valley startup has built a machine using on-board cameras to distinguish weeds from crops -- and was recently acquired by the John Deere tractor company. Reuters calls these companies the "new breed of AI weeders that investors say could disrupt the $100 billion pesticides and seeds industry."
The original submission asks: Should we welcome our weed-killing robotic overlords?
After wryly observing that Schmidt had just given the journalists in the audience their headlines, interviewer (and former Publicis CEO) Maurice Levy asked how AI and public policy can be developed so that some groups aren't "left behind." Schmidt replied that government should fund research and education around these technologies. "As [these new solutions] emerge, they will benefit all of us, and I mean the people who think they're in trouble, too," he said. He added that data shows "workers who work in jobs where the job gets more complicated get higher wages -- if they can be helped to do it." Schmidt also argued that contrary to concerns that automation and technology will eliminate jobs, "The embracement of AI is net positive for jobs." In fact, he said there will be "too many jobs" -- because as society ages, there won't be enough people working and paying taxes to fund crucial services. So AI is "the best way to make them more productive, to make them smarter, more scalable, quicker and so forth."
[...] Zimbabwe may be giving away valuable data, as Chinese AI technologists stand to benefit from access to a database of millions of Zimbabwean faces that Harare will share with CloudWalk. [...] CloudWalk has already recalibrated its existing systems using three-dimensional light technology in order to recognize darker skin tones. To recognize other characteristics that may differ from China's population, CloudWalk is also developing a system that recognizes different hairstyles and body shapes, another representative explained to the Global Times.
Further reading: Amazon Admits Its AI Alexa is Creepily Laughing at People.
Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos. But the method could also be used to create a clip of a politician saying or doing something outrageous. DARPA's technologists are especially concerned about a relatively new AI technique that could make AI fakery almost impossible to spot automatically. Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery.
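For the unfamiliar: a GAN pits two networks against each other -- a generator that fabricates samples and a discriminator that tries to tell them from real data. A toy PyTorch sketch, using a 1-D distribution as a stand-in for images, shows the adversarial loop:

```python
# Toy GAN: the generator learns to mimic N(4, 1.25) from noise while
# the discriminator learns to separate real samples from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0          # samples of "real" data
    fake = G(torch.randn(64, 8))                    # generator's forgeries

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the updated discriminator (D(fake) -> 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The fakes' mean should drift toward 4.0 as the forger improves.
print(G(torch.randn(1000, 8)).mean().item())
```

Scaled up from one-dimensional numbers to pixels, the same arms race is what produces "stunningly realistic" imagery -- and it hints at why DARPA is worried: any signal a detector can flag becomes a training signal for the generator.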
While most of Xiaoice's interactions have been in text conversations, Microsoft has started allowing the chat bot to call people on their phones. It's not exactly the same as Google Duplex, which uses the Assistant to make calls on your behalf; instead, Xiaoice holds a phone conversation with you directly. "One of the things we started doing earlier this year is having full duplex conversations," explains Nadella. "So now Xiaoice can be conversing with you in WeChat and stop and call you. Then you can just talk to it using voice." (The term "full duplex" here refers to a conversation where both participants can speak at the same time; it's not a reference to Google's product, which was named after the same jargon.)
YouTube Music is free with ads, but will cost $9.99 per month for ad-free listening. There is also YouTube Premium, which will cost $11.99 per month and will include both the ad-free music service and the exclusive video content from the now-defunct YouTube Red.
[...] AI is core to Microsoft's strategy, Nadella said: "AI is the run time which is going to shape all of what we do going forward in terms of applications as well as the platform." Microsoft is rethinking its core products by using AI to connect them together, he said, giving an example of a meeting using translation, transcription, Microsoft's HoloLens and other devices to improve decision-making. "The idea that you can now use all of the computing power that is around you -- this notion of the world as a computer -- completely changes how you conduct a meeting and fundamentally what presence means for a meeting," he said.
Beyond general non-discrimination practices, the declaration focuses on the individual right to remedy when algorithmic discrimination does occur. "This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects," the declaration suggests, "[and making decisions] subject to accessible and effective appeal and judicial review."
John Gruber now asks: "How many real-world businesses has Google Duplex been calling and not identifying itself as an AI, leaving people to think they're actually speaking to another human...? And if 'Victor' is correct that Hong's Gourmet had no advance knowledge of the call, Google may have violated California law by recording the call." On Friday he added that "This wouldn't send anyone to prison, but it would be a bit of an embarrassment, and would reinforce the notion that Google has a cavalier stance on privacy (and adhering to privacy laws)."
The Mercury News also reports that legal experts "raised questions about how Google's possible need to record Duplex's phone conversations to improve its artificial intelligence may come in conflict with California's strict two-party consent law, where all parties involved in a private phone conversation need to agree to being recorded."
For another perspective, Gizmodo's senior reviews editor reminds readers that "pretty much all tech demos are fake as hell." Speaking of Google's controversial Duplex demo, she writes that "If it didn't happen, if it is all a lie, well then I'll be totally disappointed. But I can't say I'll be surprised."
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"
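As it happens, encoding the Laws themselves is the easy part. Here's a hedged sketch (every predicate is a hypothetical stub) of the Laws as a prioritized action filter -- the catch is that the stubs are the whole problem:

```python
# Asimov's Three Laws as a prioritized action filter. The predicates
# are hypothetical stubs -- deciding what actually counts as "harm"
# or "inaction" is the unsolved part, not the if-statements.

def harms_human(action) -> bool:
    return False  # stub: no current system can evaluate this reliably

def permits_harm_by_inaction(action) -> bool:
    return False  # stub

def ordered_by_human(action) -> bool:
    return False  # stub

def endangers_robot(action) -> bool:
    return False  # stub

def permitted(action) -> bool:
    # First Law outranks everything.
    if harms_human(action) or permits_harm_by_inaction(action):
        return False
    # Second Law: obey orders, already subordinate to the First.
    if ordered_by_human(action):
        return True
    # Third Law: self-preservation, subordinate to the other two.
    return not endangers_robot(action)
```

The control flow is trivial; filling in those predicates is the part no one knows how to do, which is roughly what the readers below argue.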
Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."
And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."
But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?
Elon Musk tweeted about the accident:
It's super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage. What's actually amazing about this accident is that a Model S hit a fire truck at 60mph and the driver only broke an ankle. An impact at that speed usually results in severe injury or death.
The Associated Press defended their news coverage Friday, arguing that the facts show that "not all Tesla crashes end the same way." They also fact-check Elon Musk's claim that "probability of fatality is much lower in a Tesla," reporting that it's impossible to verify since Tesla won't release the number of miles driven by their cars or the number of fatalities. "There have been at least three already this year and a check of 2016 NHTSA fatal crash data -- the most recent year available -- shows five deaths in Tesla vehicles."
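The arithmetic behind that dispute is simple; what's missing is the denominator. Fatality rates are conventionally quoted per 100 million vehicle miles traveled, and without Tesla's mileage figure the rate can't be computed at all. A sketch (the values are placeholders, not real data):

```python
# Why the AP says the claim can't be verified: a fatality *rate*
# needs miles driven, which Tesla hasn't released. Placeholder values.
fatalities = 5            # deaths in the 2016 NHTSA data cited above
miles_driven = None       # unknown -- the unreleased denominator

def deaths_per_100m_miles(deaths: int, miles: float) -> float:
    return deaths / (miles / 1e8)

if miles_driven is None:
    print("cannot compute a rate, so the claim is unverifiable")
else:
    print(deaths_per_100m_miles(fatalities, miles_driven))
```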
Slashdot reader Reygle argues the real issue is with the drivers in the Autopilot cars. "Someone unwilling to pay attention to the road shouldn't be allowed anywhere near that road ever again."
Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
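The confounding is easy to reproduce in a few lines. In the sketch below (all numbers invented for illustration), sales really depend on the price gap with a competitor; because the old manager always matched the competitor, that dependence is invisible in the historical logs, and Charlie's "safe" price hike backfires the moment it's unilateral:

```python
# Simulating the "Charlie" example: correct historical data,
# wrong causal conclusion. All numbers are made up.
import random

random.seed(0)

def sales(our_price: float, competitor_price: float) -> float:
    # Ground truth the logs never reveal: customers respond to the
    # price *gap*, not to our price in isolation.
    return 1000 - 400 * (our_price - competitor_price) + random.gauss(0, 10)

# Historical regime: the old manager always matched the competitor,
# so the gap was zero and price variation never moved sales volume.
history = [(p, sales(p, p)) for p in
           (random.uniform(2.0, 4.0) for _ in range(100))]

# Charlie's inference from the logs: "price doesn't affect sales."
# The intervention: raise our price to 5.00 while the competitor
# stays at 3.00 -- a regime the historical data never contained.
print("typical historical sales:  ", round(history[0][1]))
print("sales after unilateral hike:", round(sales(5.0, 3.0)))
```

Regressing sales on price over `history` would estimate a coefficient near zero. The data is accurate, and the conclusion drawn from it is still wrong -- which is exactly the point about observation versus intervention.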