Microsoft

Microsoft Uses GPT-3 To Let You Code in Natural Language (techcrunch.com) 37

Microsoft is now using OpenAI's massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language. From a report: Now don't get carried away. You're not going to develop the next TikTok while only using natural language. Instead, what Microsoft is doing here is taking some of the low-code aspects of a tool like Power Apps and using AI to essentially turn those into no-code experiences, too. For now, the focus here is on Power Apps formulas, which, despite the low-code nature of the service, are something you'll have to write sooner or later if you want to build an app of any sophistication.

"Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code," said Charles Lamanna, corporate vice president for Microsoft's low-code application platform. In practice, this looks like the citizen programmer writing "find products where the name starts with 'kids'" -- and Power Apps then rendering that as "Filter('BC Orders', Left('Product Name',4)="Kids")". Because Microsoft is an investor in OpenAI, it's no surprise the company chose its model to power this experience.
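The usual pattern behind a feature like this -- translating an utterance into a formula with a large language model -- is a few-shot prompt: a handful of example request/formula pairs followed by the new request, which the model then completes. A minimal sketch of the prompt-assembly step; the example pairs, the second formula, and the prompt format are illustrative assumptions, not Microsoft's actual implementation:

```python
# Hypothetical few-shot examples pairing natural-language requests with
# Power Fx formulas. The first pair is taken from the article; the second
# is an invented illustration.
FEW_SHOT_EXAMPLES = [
    ("find products where the name starts with 'kids'",
     "Filter('BC Orders', Left('Product Name', 4) = \"Kids\")"),
    ("show orders placed in the last 7 days",
     "Filter('BC Orders', 'Order Date' >= DateAdd(Today(), -7, Days))"),
]

def build_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt for an LLM completion endpoint."""
    lines = ["Translate the request into a Power Fx formula.", ""]
    for request, formula in FEW_SHOT_EXAMPLES:
        lines.append(f"Request: {request}")
        lines.append(f"Formula: {formula}")
        lines.append("")
    lines.append(f"Request: {utterance}")
    lines.append("Formula:")  # the model's completion becomes the formula
    return "\n".join(lines)

prompt = build_prompt("find customers whose city is 'Seattle'")
print(prompt.splitlines()[-2])  # the new request, awaiting the completion
```

The heavy lifting happens inside GPT-3; the application code's job is mostly assembling examples like these and validating whatever formula comes back.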

China

Huawei Founder Urges Shift To Software To Counter US Sanctions (reuters.com) 22

Ren Zhengfei, founder of Chinese tech giant Huawei Technologies, has called on the company's staff to "dare to lead the world" in software as the company seeks growth beyond the hardware operations that U.S. sanctions have crippled. From a report: The internal memo seen by Reuters is the clearest evidence yet of the company's direction as it responds to the immense pressure sanctions have placed on the handset business that was at its core. Ren said in the memo the company was focusing on software because future development in the field is fundamentally "outside of U.S. control and we will have greater independence and autonomy." As it will be hard for Huawei to produce advanced hardware in the short term, it should focus on building software ecosystems, such as its HarmonyOS operating system, its cloud AI system Mindspore, and other IT products, the note said.
Businesses

Do You Own a Motorcycle Airbag if You Have to Pay Extra to Inflate It? (hackaday.com) 166

"Pardon me while I feed the meter on my critical safety device," quips a Hackaday article (shared by long-time Slashdot reader AmiMoJo): If you ride a motorcycle, you may have noticed that the cost of airbag vests has dropped. In at least one case, though, something very different is going on. As reported by Motherboard, you can pick up a KLIM Ai-1 for $400, but the airbag built into it will not function until unlocked with an additional purchase, and a big one at that. So do you really own the vest for $400...?

The Klim airbag vest has two components that make it work. The vest itself is from Klim, costs $400, and arrives along with the airbag unit. But if you want it to actually detect an accident and inflate, you need to load up a smartphone app and activate a small black box made by a different company: In&Motion. That requires either another one-time $400 payment or a subscription at $12 a month or $120 a year.
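For readers doing the math on those options, the break-even points fall out of simple division (the dollar figures are from the article; the comparison itself is ours):

```python
# Pricing from the article: a one-time unlock vs. two subscription tiers.
ONE_TIME_UNLOCK = 400  # dollars
MONTHLY = 12           # dollars per month
YEARLY = 120           # dollars per year

# How long before recurring payments overtake the one-time unlock?
months_to_break_even = ONE_TIME_UNLOCK / MONTHLY  # 400 / 12
years_to_break_even = ONE_TIME_UNLOCK / YEARLY    # 400 / 120

print(round(months_to_break_even, 1))  # prints 33.3
print(round(years_to_break_even, 1))   # prints 3.3
```

In other words, anyone planning to ride for more than about three and a half years comes out ahead paying the full unlock up front -- assuming the black box keeps working that long.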

If you fail to renew, the vest is essentially worthless.

Hackaday notes that this raises the question of what it means to own a piece of technology.

"Do you own your cable modem or cell phone if you aren't allowed to open it up? Do you own a piece of software that wants to call home periodically and won't let you stop it?"
AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short, they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores across the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the basic certification, 70 points for silver and so on, up to 90-plus points for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."
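The tiered scoring maps onto a simple threshold lookup. A sketch: the 60- and 70-point cut-offs and the 90-plus platinum bar come from the article, while the 80-point gold threshold is an interpolation the article only implies with "and so on":

```python
from typing import Optional

# Threshold table for the four tiers, highest first. The 80-point gold
# cut-off is an assumed interpolation between the stated 70 and 90.
LEVELS = [(90, "platinum"), (80, "gold"), (70, "silver"), (60, "basic")]

def certification_level(score: float) -> Optional[str]:
    """Return the highest certification tier the score qualifies for."""
    for threshold, name in LEVELS:
        if score >= threshold:
            return name
    return None  # below 60 points: no certification

print(certification_level(72))  # prints "silver"
```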

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, or technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universalized certification system for AI. He argues, it will help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

Google

Google Unit DeepMind Tried and Failed to Win AI Autonomy From Parent (wsj.com) 32

Senior managers at Google artificial-intelligence unit DeepMind have been negotiating for years with the parent company for more autonomy, seeking an independent legal structure for the sensitive research they do. From a report: DeepMind told staff late last month that Google called off those talks, WSJ reported Friday, citing people familiar with the matter. The end of the long-running negotiations, which hasn't previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence. Earlier this month, Google unveiled plans to double the size of its team studying the ethics of artificial intelligence and to consolidate that research.

[...] DeepMind's founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn't be controlled by a single corporate entity, according to people familiar with those plans. On a video call last month with DeepMind staff, co-founder Demis Hassabis said the unit's effort to negotiate a more autonomous corporate structure was over, according to people familiar with the matter. He also said DeepMind's AI research and its application would be reviewed by an ethics board staffed mostly by senior Google executives.

Supercomputing

Google Plans To Build a Commercial Quantum Computer By 2029 (engadget.com) 56

Google developers are confident they can build a commercial-grade quantum computer by 2029. Engadget reports: Google CEO Sundar Pichai announced the plan during today's I/O stream, and in a blog post, quantum AI lead engineer Erik Lucero further outlined the company's goal to "build a useful, error-corrected quantum computer" within the decade. Executives also revealed Google's new campus in Santa Barbara, California, which is dedicated to quantum AI. The campus has Google's first quantum data center, hardware research laboratories, and the company's very own quantum processor chip fabrication facilities.

"As we look 10 years into the future, many of the greatest global challenges, from climate change to handling the next pandemic, demand a new kind of computing," Lucero said. "To build better batteries (to lighten the load on the power grid), or to create fertilizer to feed the world without creating 2 percent of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can't simulate molecules very well using classical computers."

Microsoft

Microsoft Teams Launches For Friends and Family With Free All-Day Video Calling (theverge.com) 59

Microsoft is launching the personal version of Microsoft Teams today. After previewing the service nearly a year ago, Microsoft Teams is now available for free personal use amongst friends and families. From a report: The service itself is almost identical to the Microsoft Teams that businesses use, and it will allow people to chat, video call, and share calendars, locations, and files easily. Microsoft is also continuing to offer everyone the free 24-hour video calls it introduced in the preview version in November. You'll be able to meet up with up to 300 people in video calls that can last for 24 hours. Microsoft will eventually enforce limits of 60 minutes for group calls of up to 100 people after the pandemic, but keep 24 hours for 1:1 calls. While the preview initially launched on iOS and Android, Microsoft Teams for personal use now works across the web, mobile, and desktop apps. Microsoft is also allowing Teams personal users to enable its Together mode -- a feature that uses AI to segment your face and shoulders and place you together with other people in a virtual space. Skype got this same feature back in December.
AI

AI Tool Writes Real Estate Descriptions Without Ever Stepping Inside a Home (cnn.com) 32

A Canadian startup called Listing AI is using AI to quickly churn out computer-generated descriptions of real estate. All users need to do is give it some details about the home, and the AI does the rest. CNN reports: "L O V E L Y Oakland!" the house description began. It went on to give a slew of details about the 1,484 square-foot home -- light-filled, charming, Mediterranean-style, with a yard that "boasts lush front landscaping" -- and finished by describing the "cozy fireplace" and "rustic-chic" pressed tin ceiling in the living room. The results still need work: The real-life Oakland, California, home that fits with the above description (which my family is currently selling) actually has a pressed tin ceiling in the dining room, rather than the living room, for instance. The descriptions Listing AI created for me are not nearly as specific or well-written as the one crafted by our (human) realtor. And I had to provide the website with a lot of information about different rooms and features of the house and the outdoor landscaping -- a process that felt a bit like real-estate Mad Libs -- before the website was able to come up with several different descriptions.

But the general coherence of the descriptions that Listing AI proposed within seconds of my submission provides yet another sign that AI is getting better at a task that was traditionally seen as uniquely human -- and shows how people may be able to work with the technology, rather than fearing it may replace us. It probably won't do all the work of writing a house description for you, but according to Listing AI co-founder Mustafa Al-Hayali, that's not the point. He hopes it will do about 80% to 90% of the work of coming up with a home description, with the remainder completed by a realtor or a copywriter. "I don't believe it's meant to replace a person when it comes to completing a task, but it's supposed to make their job a whole lot easier," Al-Hayali told CNN Business. "It can generate ideas you can use."
The information used in the app is processed by GPT-3, an AI model from nonprofit research company OpenAI. According to MIT Technology Review, GPT-3 could herald a new type of search engine.
Google

Language Models Like GPT-3 Could Herald a New Type of Search Engine (technologyreview.com) 13

An anonymous reader quotes a report from MIT Technology Review: In 1998 a couple of Stanford graduate students published a paper describing a new kind of search engine: "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems." The key innovation was an algorithm called PageRank, which ranked search results by calculating how relevant they were to a user's query on the basis of their links to other pages on the web. On the back of PageRank, Google became the gateway to the internet, and Sergey Brin and Larry Page built one of the biggest companies in the world. Now a team of Google researchers has published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model, such as BERT or GPT-3 -- or a future version of them. The idea is that instead of searching for information in a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but what they do -- and how we interact with them.
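The PageRank idea sketched above fits in a few lines: a page's score is the probability that a "random surfer" lands on it, following links with probability d and jumping to a random page otherwise. The toy three-page graph below is illustrative; d=0.85 is the damping factor conventionally used with the algorithm:

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start uniform
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}    # random-jump share
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share              # follow-a-link share
            else:
                for q in pages:                  # dangling page: spread evenly
                    new[q] += d * rank[p] / n
        rank = new
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # prints "c": the most linked-to page
```

The contrast with the language-model proposal is stark: PageRank scores documents and leaves reading them to the user, whereas a model like GPT-3 would synthesize the answer itself.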

[Donald Metzler and his colleagues at Google Research] are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and back up its answers with references to supporting evidence, as Wikipedia articles aim to do. Large language models get us part of the way there. Trained on most of the web and hundreds of books, GPT-3 draws information from multiple sources to answer questions in natural language. The problem is that it does not keep track of those sources and cannot provide evidence for its answers. There's no way to tell if GPT-3 is parroting trustworthy information or disinformation -- or simply spewing nonsense of its own making.

Metzler and his colleagues call language models dilettantes -- "They are perceived to know a lot but their knowledge is skin deep." The solution, they claim, is to build and train future BERTs and GPT-3s to retain records of where their words come from. No such models are yet able to do this, but it is possible in principle, and there is early work in that direction. There have been decades of progress on different areas of search, from answering queries to summarizing documents to structuring information, says Ziqi Zhang at the University of Sheffield, UK, who studies information retrieval on the web. But none of these technologies overhauled search because they each address specific problems and are not generalizable. The exciting premise of this paper is that large language models are able to do all these things at the same time, he says.

AI

GTA 5 Graphics Are Now Being Boosted By Advanced AI At Intel (gizmodo.com) 44

Researchers at Intel Labs have applied machine learning techniques to GTA 5 to make it look incredibly realistic. Gizmodo reports: [I]nstead of training a neural network on famous masterpieces, the researchers at Intel Labs relied on the Cityscapes Dataset, a collection of images of a German city's urban center captured by a car's built-in camera, for training. When a different artistic style is applied to footage using machine learning techniques, the results are often temporally unstable, which means that frame by frame there are weird artifacts jumping around, appearing and disappearing, that diminish how real the results look. With this new approach, the rendered effects exhibit none of those telltale artifacts, because in addition to processing the footage rendered by Grand Theft Auto V's game engine, the neural network also uses other rendered data the game's engine has access to, like the depth of objects in a scene, and information about how the lighting is being processed and rendered.

That's a gross simplification -- you can read a more in-depth explanation of the research here -- but the results are remarkably photorealistic. The surface of the road is smoothed out, highlights on vehicles look more pronounced, and the surrounding hills in several clips look more lush and alive with vegetation. What's even more impressive is that the researchers think, with the right hardware and further optimization, the gameplay footage could be enhanced by their convolutional network at "interactive rates" -- another way to say in real-time -- when baked into a video game's rendering engine.

AI

Voice Actor Reportedly Responsible For Amazon Alexa Revealed (theverge.com) 23

An anonymous reader quotes a report from The Verge: Amazon's Alexa has a voice familiar to millions: calm, warm, and measured. But like most synthetic speech, its tones have a human origin. There was someone whose voice had to be recorded, analyzed, and algorithmically reproduced to create Alexa as we know it now. Amazon has never revealed who this "original Alexa" is, but journalist Brad Stone says he tracked her down, and she is Nina Rolle, a voiceover artist based in Boulder, Colorado. The claim comes from Stone's upcoming book on the tech giant, Amazon Unbound, an excerpt of which is published here in Wired. Neither Amazon nor Rolle confirmed or denied Stone's reporting, which he says is based on conversations with the professional voiceover community, but Rolle's voice alone makes for a compelling case.

Here's how Stone writes up the process in selecting Alexa's voice: "Believing that the selection of the right voice for Alexa was critical, [then-Amazon exec Greg] Hart and colleagues spent months reviewing the recordings of various candidates that GM Voices produced for the project, and presented the top picks to Bezos. The Amazon team ranked the best ones, asked for additional samples, and finally made a choice. Bezos signed off on it. Characteristically secretive, Amazon has never revealed the name of the voice artist behind Alexa. I learned her identity after canvassing the professional voice-over community: Boulder, Colorado-based voice actress and singer Nina Rolle. Her professional website contains links to old radio ads for products such as Mott's Apple Juice and the Volkswagen Passat -- and the warm timbre of Alexa's voice is unmistakable. Rolle said she wasn't allowed to talk to me when I reached her on the phone in February 2021. When I asked Amazon to speak with her, they declined."

Google

Google Plans To Double AI Ethics Research Staff (wsj.com) 49

Alphabet's Google plans to double the size of its team studying artificial-intelligence ethics in the coming years, as the company looks to strengthen a group that has had its credibility challenged by research controversies and personnel defections. From a report: Vice President of Engineering Marian Croak said at The Wall Street Journal's Future of Everything Festival that the hires will increase the size of the responsible AI team that she leads to 200 researchers. Additionally, she said that Alphabet Chief Executive Sundar Pichai has committed to boost the operating budget of a team tasked with evaluating code and product to avert harm, discrimination and other problems with AI. "Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business," Ms. Croak said. "It severely damages the brand if things aren't done in an ethical way." Google announced in February that Ms. Croak would lead the AI ethics group after it fired the division's co-head, Margaret Mitchell, for allegedly sharing internal documents with people outside the company. Ms. Mitchell's exit followed criticism of Google's suppression of research last year by a prominent member of the team, Timnit Gebru, who says she was fired because of studies critical of the company's approach to AI. Mr. Pichai pledged an investigation into the circumstances around Ms. Gebru's departure and said he would seek to restore trust.
Programming

IBM's CodeNet Dataset Can Teach AI To Translate Computer Languages (engadget.com) 40

IBM announced during its Think 2021 conference on Monday that its researchers have crafted a Rosetta Stone for programming code. Engadget reports: In effect, we've taught computers how to speak human, so why not also teach computers to speak more computer? That's what IBM's Project CodeNet seeks to accomplish. "We need our ImageNet, which can snowball the innovation and can unleash this innovation in algorithms," [Ruchir Puri, IBM Fellow and Chief Scientist at IBM Research, said during his Think 2021 presentation]. CodeNet is essentially the ImageNet of computers. It's an expansive dataset designed to teach AI/ML systems how to translate code and consists of some 14 million snippets and 500 million lines spread across more than 55 legacy and active languages -- from COBOL and FORTRAN to Java, C++, and Python.

"Since the data set itself contains 50 different languages, it can actually enable algorithms for many pairwise combinations," Puri explained. "Having said that, there has been work done in human language areas, like neural machine translation which, rather than doing pairwise, actually becomes more language-independent and can derive an intermediate abstraction through which it translates into many different languages." In short, the dataset is constructed in a manner that enables bidirectional translation. That is, you can take some legacy COBOL code -- which, terrifyingly, still constitutes a significant amount of this country's banking and federal government infrastructure -- and translate it into Java as easily as you could take a snippet of Java and regress it back into COBOL.
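"Many pairwise combinations" is easy to quantify: with n languages aligned in one dataset, there are n * (n - 1) ordered source-to-target translation directions, since translating Java to COBOL is a different task from COBOL to Java. A small sketch with an illustrative subset of CodeNet's languages (at the full 55 languages, that would be 55 * 54 = 2,970 directions):

```python
from itertools import permutations

# A small illustrative subset of CodeNet's languages, not the full inventory.
languages = ["COBOL", "FORTRAN", "Java", "C++", "Python"]

# Ordered (source, target) pairs: direction matters for translation.
pairs = list(permutations(languages, 2))

print(len(pairs))                  # prints 20, i.e. 5 * 4 directions
print(("COBOL", "Java") in pairs)  # prints True: the modernization direction
```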

CodeNet can be used for functions like code search and clone detection, in addition to its intended translational duties and serving as a benchmark dataset. Also, each sample is labeled with its CPU run time and memory footprint, allowing researchers to run regression studies and potentially develop automated code correction systems. Project CodeNet consists of more than 14 million code samples along with 4000-plus coding problems collected and curated from decades of programming challenges and competitions across the globe. "The way the data set actually came about," Puri said, "there are many kinds of programming competitions and all kinds of problems -- some of them more businesslike, some of them more academic. These are the languages that have been used over the last decade and a half in many of these competitions with thousands of students or competitors submitting solutions." Additionally, users can run individual code samples "to extract metadata and verify outputs from generative AI models for correctness," according to an IBM press release. "This will enable researchers to program intent equivalence when translating one programming language into another." [...] IBM intends to release the CodeNet data to the public domain, allowing researchers worldwide equal and free access.

Privacy

Unlike Clearview AI, this Facial-Recognition Search Engine is Open to Everyone (cnn.com) 30

This week CNN investigated PimEyes, a "mysterious" but powerful facial-recognition search engine: If you upload a picture of your face to PimEyes' website, it will immediately show you any pictures of yourself that the company has found around the internet. You might recognize all of them, or be surprised (or, perhaps, even horrified) by some; these images may include anything from wedding or vacation snapshots to pornographic images. PimEyes is open to anyone with internet access. It's a stark contrast from Clearview AI, which became well-known for building its enormous stash of faces with images of people from social networks and limits its use to law enforcement (Clearview has said it has hundreds of such customers).

PimEyes' decision to make facial-recognition software available to the general public crosses a line that technology companies are typically unwilling to traverse, and opens up endless possibilities for how it can be used and abused. Imagine a potential employer digging into your past, an abusive ex tracking you, or a random stranger snapping a photo of you in public and then finding you online. This is all possible through PimEyes: Though the website instructs users to search for themselves, it doesn't stop them from uploading photos of anyone. At the same time, it doesn't explicitly identify anyone by name, but as CNN Business discovered by using the site, that information may be just clicks away from images PimEyes pulls up...

PimEyes lets users see a limited number of small, somewhat pixelated search results at no cost, or you can pay a monthly fee, which starts at $29.99, for more extensive search results and features (such as to click through to see full-size images on the websites where PimEyes found them and to set up alerts for when PimEyes finds new pictures of faces online that its software believes match an uploaded face)... Although PimEyes instructs visitors to only search for their own face, there's no mechanism on the site to ensure it's used this way... There's also no way to ensure this facial-recognition technology isn't used to misidentify people...

The website currently lists no information about who owns or runs the search engine, or how to reach them, and users must submit a form to get answers to questions or help with accounts.

Open Source

Linux Foundation Launches Open Source Agriculture Infrastructure Project (venturebeat.com) 20

"The Linux Foundation has lifted the lid on a new open source digital infrastructure project aimed at the agriculture industry," reports VentureBeat: The AgStack Foundation, as the new project will be known, is designed to foster collaboration among all key stakeholders in the global agriculture space, spanning private business, governments, and academia.

As with just about every other industry in recent years, there has been a growing digital transformation across the agriculture sector that has ushered in new connected devices for farmers and myriad AI and automated tools to optimize crop growth and circumvent critical obstacles, such as labor shortages. Open source technologies bring the added benefit of data and tools that any party can reuse for free, lowering the barrier to entry and helping keep companies from getting locked into proprietary software operated by a handful of big players...

The AgStack Foundation will be focused on supporting the creation and maintenance of free and sector-specific digital infrastructure for both applications and the associated data. It will lean on existing technologies and agricultural standards; public data and models; and other open source projects, such as Kubernetes, Hyperledger, Open Horizon, Postgres, and Django, according to a statement.

"Current practices in AgTech are involved in building proprietary infrastructure and point-to-point connectivity in order to derive value from applications," AgStack executive director Sumer Johal told VentureBeat. "This is an unnecessarily costly use of human capital. Like an operating system, we aspire to reduce the time and effort required by companies to produce their own proprietary applications and for content consumers to consume this interoperably."

AI

Deepfake Satellite Imagery Poses a Not-so-Distant Threat (theverge.com) 30

Long-time Slashdot reader AmiMoJo quotes the Verge's warning about "deepfake geography: AI-generated images of cityscapes and countryside." Specifically, geographers are concerned about the spread of fake, AI-generated satellite imagery. Such pictures could mislead in a variety of ways. They could be used to create hoaxes about wildfires or floods, or to discredit stories based on real satellite imagery... Deepfake geography might even be a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes...

The first step to tackling these issues is to make people aware there's a problem in the first place, says Bo Zhao, an assistant professor of geography at the University of Washington. Zhao and his colleagues recently published a paper on the subject of "deep fake geography," which includes their own experiments generating and detecting this imagery... As part of their study, Zhao and his colleagues created software to generate deepfake satellite images, using the same basic AI method (a technique known as generative adversarial networks, or GANs) used in well-known programs like ThisPersonDoesNotExist.com. They then created detection software that was able to spot the fakes based on characteristics like texture, contrast, and color. But as experts have warned for years regarding deepfakes of people, any detection tool needs constant updates to keep up with improvements in deepfake generation.
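The detector described above keys on low-level statistics such as texture, contrast, and color. A toy version of that feature-extraction step: summarize an image (here, a nested list of RGB pixels) by per-channel means and a simple contrast measure, the standard deviation of brightness. Real detectors feed such features, or the raw pixels, into a trained classifier; the 2x2 "image" and the specific feature choices here are purely illustrative:

```python
from statistics import pstdev

def image_features(pixels):
    """pixels: list of rows, each a list of (r, g, b) tuples."""
    flat = [px for row in pixels for px in row]
    # Per-channel color means: GAN-generated imagery can drift in color balance.
    channel_means = tuple(
        sum(px[c] for px in flat) / len(flat) for c in range(3)
    )
    # Contrast proxy: spread of per-pixel brightness values.
    gray = [(r + g + b) / 3 for r, g, b in flat]
    contrast = pstdev(gray)
    return {"mean_rgb": channel_means, "contrast": contrast}

tiny = [[(255, 0, 0), (0, 0, 0)],
        [(0, 0, 0), (255, 0, 0)]]
feats = image_features(tiny)
print(round(feats["contrast"], 1))  # prints 42.5
```

As the researchers note, any such detector is a moving target: each feature it relies on is something the next generation of GANs can learn to reproduce.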

Transportation

When Autonomous Cars Teach Themselves To Drive Better Than Humans (ieee.org) 86

schwit1 shares a report from IEEE Spectrum, written by Evan Ackerman: A few weeks ago, the CTO of Cruise tweeted an example of one of their AVs demonstrating a safety behavior where it moves over to make room for a cyclist. What's interesting about this behavior, though, is that the AV does this for cyclists approaching rapidly from behind the vehicle, something a human is far less likely to notice, much less react to. A neat trick -- but what does it mean, and what's next? In the video [here], as the cyclist approaches from the rear right side at a pretty good clip, you can see the autonomous vehicle pull to the left a little bit, increasing the amount of space that the cyclist can use to pass on the right.

One important question that we're not really going to tackle here is whether this is even a good idea in the first place, since (as a cyclist) I'd personally prefer that cars be predictable rather than sometimes doing weirdly nice things that I might not be prepared for. But that's one of the things that makes cyclists tricky: we're unpredictable. And for AVs, dealing with unpredictable things is notoriously problematic. Cruise's approach to this, explains Rashed Haq, VP of Robotics at Cruise, is to try to give their autonomous system some idea of how unpredictable cyclists can be, and then plan its actions accordingly. Cruise has collected millions of miles of real-world data from its sensorized vehicles that include cyclists doing all sorts of things. And their system has built up a model of how certain it can be that when it sees a cyclist, it can accurately predict what that cyclist is going to do next.

Essentially, based on its understanding of the unpredictability of cyclists, the Cruise AV determined that the probability of a safe interaction is improved when it gives cyclists more space, so that's what it tries to do whenever possible. This behavior illustrates some of the critical differences between autonomous and human-driven vehicles. Humans drive around with relatively limited situational awareness and deal with things like uncertainty primarily on a subconscious level. AVs, on the other hand, are constantly predicting the future in very explicit ways. Humans tend to have the edge when something unusual happens, because we're able to instantly apply a lifetime's worth of common-sense knowledge about the world to our decision-making process. Meanwhile, AVs are always considering the safest next course of action across the entire space that they're able to predict.
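The trade-off described above, where more lateral space lowers the chance of a bad interaction with an unpredictable cyclist, can be made concrete with a toy model: treat the cyclist's lateral drift as normally distributed and compute the probability it exceeds the gap the car leaves. The 0.3-meter standard deviation is an invented number for illustration, not anything from Cruise's actual planner:

```python
from statistics import NormalDist

# Toy model of cyclist lateral drift toward the car, in meters.
drift = NormalDist(mu=0.0, sigma=0.3)

def conflict_probability(gap_m: float) -> float:
    """Chance the cyclist drifts further toward the car than the gap allows."""
    return 1.0 - drift.cdf(gap_m)

# Widening the gap drives the modeled conflict probability toward zero.
for gap in (0.5, 1.0, 1.5):
    print(f"gap {gap} m -> conflict prob {conflict_probability(gap):.4f}")
```

Under this model a planner that can cheaply shift left will do so, which is exactly the behavior in the Cruise video: the marginal cost of extra space is low and the marginal reduction in predicted risk is large.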

AI

White House Launches New AI Website (axios.com) 22

The White House has launched a new website, AI.gov, to make artificial intelligence research more accessible across the nation. Axios: The U.S. once led significantly in the global artificial intelligence race, but now risks being overtaken by China. This is one step the White House is taking to drum up excitement for AI and broaden educational opportunities in the field. The website's target audience is the general public, and its purpose is to make public information on AI more visible to someone like a teacher or student interested in science. Users will be able to visit the website to learn how artificial intelligence is being used across the nation in a variety of ways, including responding to the COVID pandemic and forecasting the weather, for example. It's also meant to be a tool to advance research.
AI

Musk's Claims Challenged About Absence of Autopilot in Texas Tesla Crash (cnn.com) 205

"Despite early claims by #Tesla #ElonMusk, Autopilot WAS engaged in tragic crash in The Woodlands," tweeted U.S. Congressman Kevin Brady on Wednesday. (Adding "We need answers.")

But maybe it depends on how you define Autopilot. CNN reports: Tesla said Monday that one of Autopilot's features was active during the April 17 crash that killed two men in Spring, Texas....

Lars Moravy, Tesla's vice president of vehicle engineering, said on the company's earnings call Monday that Tesla's adaptive cruise control was engaged and accelerated to 30 mph before the car crashed. Autopilot is a suite of driver assistance features, including traffic-aware cruise control and Autosteer, according to Tesla's website... The North American owner's manuals for the Model 3, Model S and Model X all describe traffic-aware cruise control as an Autopilot feature. Tesla's revelation may be at odds with the initial description of the crash from its CEO Elon Musk, who said two days after the crash that "data logs recovered so far show Autopilot was not enabled."

Alternately, Forbes suggests there may just be some confusion, noting that the earnings call included descriptions of tests Tesla performed on one of its own cars after the accident. So when Tesla said adaptive cruise control "only accelerated the car to 30mph [over] the distance before the car crashed," it could just have been referring to its own experiments. (Tesla also points out adaptive cruise control only engages when the driver is buckled — and disengages slowly if they're unbuckled — and after the Texas crash all seat belts were unbuckled.)

Why so much confusion? Part of the problem may be, as CNN points out, that Tesla "generally does not engage with the professional news media."

But The Drive shares another theory about the crash: A relative of the deceased told a local news station that the owner allegedly "may have hopped in the back seat after backing the car out of the driveway." Moments later, the car crashed when it failed to negotiate a turn at high speed.
CNN adds: Bryan Reimer, the associate director of the New England University Transportation Center at MIT, who studies driver assistance systems like Autopilot, said one of the plausible explanations for the crash is that the driver was confused and thought they had activated Autosteer, when only traffic-aware cruise control had been turned on. "The general understanding of Autopilot is that it's one feature, but in reality it is two things bolted together," said Reimer, referring to traffic-aware cruise control and Autosteer.
But according to the Washington Post, Tesla also disputes that theory: Tesla executives on Monday claimed a driver was behind the wheel at the time of a fatal crash that killed two in suburban Houston this month, contradicting local authorities who have previously said they were certain no one was in that seat. Tesla made the statement on its earnings call Monday... Lars Moravy, the company's vice president of vehicle engineering, said the steering wheel was "deformed," indicating a driver's presence at the time of the crash...

Mark Herman, constable for Harris County Precinct 4, told the station KHOU that police were "100 percent certain that no one was in the driver's seat."

Role Playing (Games)

AI-Generated Text Adventure Community Angry Content Moderators May Read Their Erotica (vice.com) 56

Vice reports: The AI-powered story generator AI Dungeon has come under fire from fans recently for changes to how the development team moderates content. Notably, the player base is worried that the AI Dungeon developers will be reading their porn stories in the game. Separately, a hacker recently revealed vulnerabilities in the game that show that roughly half of the game's content is porn.

AI Dungeon is a text-based adventure game where, instead of playing through a scenario entirely designed by someone else, the responses to the prompts you type are generated by an AI... This week, AI Dungeon players noticed that more of their stories were being flagged by the content moderation system, and flagged more frequently. Latitude, the developers of AI Dungeon, released a blog post explaining that it had implemented a new algorithm for content moderation specifically to look for content that involves "sexual content involving minors... We did not communicate this test to the Community in advance, which created an environment where users and other members of our larger community, including platform moderators, were caught off guard... Latitude reviews content flagged by the model for the purposes of improving the model, to enforce our policies, and to comply with law."

Latitude later clarified in its Discord at what point a human moderator would read private stories on AI Dungeon. It said that if a story appears to be incorrectly flagged, human moderators would stop reading the inputs, but that if a story appeared to be correctly flagged then they "may look at the user's other stories for signs that the user may be using AI Dungeon for prohibited purposes." Latitude CEO Nick Walton told Motherboard that human moderators only look at stories in the "very few cases" that they violate the terms of service...
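The escalation policy Latitude describes (stop reading if a flag looks like a false positive, review the user's other stories if it looks correct) can be sketched as a small triage function. The classifier score, the threshold, and the action names here are hypothetical illustrations, not Latitude's actual system.

```python
from dataclasses import dataclass

@dataclass
class Story:
    user_id: str
    text: str
    flag_score: float  # hypothetical model confidence in [0, 1]

def triage(story, threshold=0.8):
    """Route a model-flagged story per the policy described above.

    Returns the moderation action as a string; a real system would
    enqueue review work items rather than return labels.
    """
    if story.flag_score < threshold:
        # Low-confidence flag: treat as a likely false positive and
        # stop -- no human reads further into the private story.
        return "dismiss"
    # High-confidence flag: a human reviews this story, and per the
    # stated policy may also examine the same user's other stories.
    return "review_user_history"
```

The key design point the policy implies is asymmetry: false positives should cost the user as little privacy as possible, while confirmed violations trigger a broader look at that account.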

All of this has been compounded by the fact that a security researcher named AetherDevSecOps just published a lengthy report on security issues with AI Dungeon on GitHub, which included one that allowed them to look at all the user input data stored in AI Dungeon. Roughly a third of stories on AI Dungeon are sexually explicit, and about half are marked as NSFW, AetherDevSecOps estimated.
