Intel

Intel's Dystopian Anti-Harassment AI Lets Users Opt In for 'Some' Racism (vice.com) 131

Intel is launching an artificial intelligence application that will recognize and redact hate speech in real-time. It's called Bleep, and Intel hopes it'll help with one of gaming's oldest and most intractable problems -- people can be real pieces of shit online. From a report: A video of the app shows that it will allow users to customize what kind and how much hate speech they want to see, including "Racism" and "White Nationalism" sliders that can be set to "none," "some," "most," or "all," and a separate on and off toggle for the "N-word." "While we recognize that solutions like Bleep don't erase the problem, we believe it's a step in the right direction -- giving gamers a tool to control their experience," Roger Chandler, Vice President and General Manager of Intel Client Product Solutions, said during a virtual presentation at 2021's Game Developers Conference.

According to Intel Marketing Engineer Craig Raymond, Bleep is "an end-user application that uses AI to detect and redact audio based on your user preferences." In footage of the application, Bleep presented users with a list of sliders so gamers could control the amount of hate and abuse they encounter. The list included ableism and body shaming, LGBTQ+ hate, aggression, misogyny, name-calling, racism and xenophobia, sexually explicit language, swearing, and white nationalism. As Chandler explained, Intel can't "solve" racism or the long-running and well-documented problems in gaming culture (and culture more broadly). At the same time, Bleep is techno-AI solutionism that feels pretty dystopian, pitching racism, xenophobia, and general toxicity as settings that can be tuned up and down as though they were graphics, sound, or control sliders on a video game. It is also a way of admitting defeat: if we can't stop players from being incredibly racist in chat, we can simply filter out what they say and pretend they don't exist.
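
As a rough illustration of how slider-style preferences could gate a redaction filter, here is a minimal Python sketch; the category names and level labels follow Intel's demo, while the thresholds, severity scores, and detector are invented and have nothing to do with Bleep's actual implementation:

    LEVELS = {"none": 0.0, "some": 0.35, "most": 0.7, "all": 1.0}  # thresholds invented

    preferences = {"racism": "none", "swearing": "some", "aggression": "most"}

    def should_redact(category, severity, prefs):
        # Redact when the detector's severity score exceeds the user's allowance.
        allowed = LEVELS[prefs.get(category, "none")]
        return severity > allowed

    # A speech classifier (not shown) would tag each audio segment with a
    # category and a severity score in [0, 1]:
    segments = [("racism", 0.9), ("swearing", 0.2), ("aggression", 0.5)]
    for category, severity in segments:
        print(category, "REDACTED" if should_redact(category, severity, preferences) else "kept")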

AI

Government Audit of AI With Ties To White Supremacy Finds No AI (venturebeat.com) 148

Khari Johnson writes via VentureBeat: In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero's analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee. Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general's office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.

"Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time," reads a letter Utah State Auditor John Dougall released last week. The incident, which VentureBeat previously referred to as part of a "fight for the soul of machine learning," demonstrates why government officials must evaluate claims made by companies vying for contracts and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies' capabilities or turn out to be charlatans or white supremacists -- constituting a public nuisance or worse. The audit result also suggests a lack of scrutiny can undermine public trust in AI and the governments that deploy them.

Google

Google AI Research Manager Quits After Two Ousted From Group (bloomberg.com) 82

Google research manager Samy Bengio, who oversaw the company's AI ethics group until a controversy led to the ouster of two female leaders, resigned on Tuesday to pursue other opportunities. Bloomberg reports: Bengio, who managed hundreds of researchers in the Google Brain team, announced his departure in an email to staff that was obtained by Bloomberg. His last day will be April 28. An expert in a type of AI known as machine learning, Bengio joined Google in 2007. Ousted Ethical AI co-leads Timnit Gebru and Margaret Mitchell had reported to Bengio and considered him an ally. In February, Google reorganized the research unit, placing the remaining Ethical AI group members under Marian Croak, cutting Bengio's responsibilities.

"While I am looking forward to my next challenge, there's no doubt that leaving this wonderful team is really difficult," Bengio wrote in the email. "I learned so much with all of you, in terms of machine learning research of course, but also on how difficult yet important it is to organize a large team of researchers so as to promote long term ambitious research, exploration, rigor, diversity and inclusion," Bengio wrote in his email. He did not refer to Gebru, Mitchell or the disagreements that led to their departures. [...]

Intel

Intel Launches First 10nm 3rd Gen Xeon Scalable Processors For Data Centers (hothardware.com) 42

MojoKid writes: Intel just officially launched its first server products built on its advanced 10nm manufacturing process node, the 3rd Gen Xeon Scalable family of processors. 3rd Gen Xeon Scalable processors are based on the 10nm Ice Lake-SP microarchitecture, which incorporates a number of new features and enhancements. Core counts have been significantly increased with this generation, and now offer up to 40 cores / 80 threads per socket versus 28 cores / 56 threads in Intel's previous-gen offerings. The 3rd Gen Intel Xeon Scalable processor platform also supports up to 8 channels of DDR4-3200 memory, up to 6 terabytes of total memory, and up to 64 lanes of PCIe Gen4 connectivity per socket, for more bandwidth, higher capacity, and copious IO.
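
Back-of-envelope math on those memory specs puts theoretical peak bandwidth per socket around 205 GB/s (a sketch; real-world throughput will be lower):

    channels = 8                    # DDR4 channels per socket
    transfer_rate_mts = 3_200       # DDR4-3200: mega-transfers per second
    bytes_per_transfer = 8          # 64-bit channel width
    print(channels * transfer_rate_mts * bytes_per_transfer / 1_000)  # 204.8 GB/s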

New AI, security and cryptographic capabilities arrive with the platform as well. Across Cloud, HPC, 5G, IoT, and AI workloads, new 3rd Gen Xeon Scalable processors are claimed to offer significant uplifts across the board versus their previous-gen counterparts. And versus rival AMD's EPYC platform, Intel is also claiming many victories, specifically when AVX-512, new crypto instructions, or DL Boost are added to the equation. Core counts in the line-up range from 8 to 40 cores per processor, and TDPs vary depending on the maximum base and boost frequencies and core count / configuration (up to a 270W TDP). Intel is shipping 3rd Gen Xeon Scalable CPUs to key customers now, having delivered over 200K chips in Q1 this year, with a steady ramp-up to follow.

IBM

Why IBM is Pushing 'Fully Homomorphic Encryption' (venturebeat.com) 122

VentureBeat reports on a "next-generation security" technique that allows data to remain encrypted while it's being processed.

"A security process known as fully homomorphic encryption is now on the verge of making its way out of the labs and into the hands of early adopters after a long gestation period." Companies such as Microsoft and Intel have been big proponents of homomorphic encryption. Last December, IBM made a splash when it released its first homomorphic encryption services. That package included educational material, support, and prototyping environments for companies that want to experiment. In a recent media presentation on the future of cryptography, IBM director of strategy and emerging technology Eric Maass explained why the company is so bullish on "fully homomorphic encryption" (FHE)...

"IBM has been working on FHE for more than a decade, and we're finally reaching an apex where we believe this is ready for clients to begin adopting in a more widespread manner," Maass said. "And that becomes the next challenge: widespread adoption. There are currently very few organizations here that have the skills and expertise to use FHE." To accelerate that development, IBM Research has released open source toolkits, while IBM Security launched its first commercial FHE service in December...

Maass said in the near term, IBM envisions FHE being attractive to highly regulated industries, such as financial services and health care. "They have both the need to unlock the value of that data, but also face extreme pressures to secure and preserve the privacy of the data that they're computing upon," he said.

The Wikipedia entry for homomorphic encryption calls it "an extension of either symmetric-key or public-key cryptography."
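
IBM's toolkits are the real article; for a self-contained taste of the underlying idea, here is a toy Python sketch of a Paillier-style additively homomorphic scheme, in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is only partially homomorphic and uses insecure toy parameters; FHE extends the idea to arbitrary computation on encrypted data:

    import math, random

    # Toy Paillier cryptosystem -- additively homomorphic, NOT fully
    # homomorphic, and NOT secure (real keys are 2048+ bits).
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)       # Carmichael's function of n
    g = n + 1                          # standard simplified generator
    mu = pow(lam, -1, n)               # modular inverse (Python 3.8+)

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def decrypt(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    a, b = encrypt(20), encrypt(22)
    assert decrypt(a * b % n2) == 42   # ciphertext multiply == plaintext add
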
AI

A South Korean Chatbot Showed How Sloppy Tech Companies Can Be With User Data (slate.com) 11

A "Science of Love" app analyzed text conversations uploaded by its users to assess the degree of romantic feelings (based on the phrases and emojis used and the average response time). Then after more than four years, its parent company ScatterLab introduced a conversational A.I. chatbot called Lee-Luda — which it said had been trained on 10 billion such conversational logs.

But because it used billions of conversations from real people, its problems soon went beyond sexually explicit comments and "verbally abusive" language: It also soon became clear that the huge training dataset included personal and sensitive information. This revelation emerged when the chatbot began exposing people's names, nicknames, and home addresses in its responses. The company admitted that its developers "failed to remove some personal information depending on the context," but still claimed that the dataset used to train the chatbot Lee-Luda "did not include names, phone numbers, addresses, and emails that could be used to verify an individual." However, A.I. developers in South Korea rebutted the company's statement, asserting that Lee-Luda could not have learned to include such personal information in its responses unless it existed in the training dataset. A.I. researchers have also pointed out that it is possible to recover the training dataset from the A.I. chatbot itself. So if personal information exists in the training dataset, it can be extracted by querying the chatbot.
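
The extraction risk is easy to demonstrate at toy scale: even a trivial language model memorizes strings that occur in only one context in its training data and reproduces them verbatim when prompted. A hypothetical Python sketch (the personal record below is invented):

    import random
    from collections import defaultdict

    # Train a word-bigram "chatbot" on chat logs containing one sensitive
    # record. Rare strings appear in a single context, so prompting with
    # that context extracts them verbatim.
    logs = ("thanks for the gift . see you soon . "
            "my address is 12 elm street . "      # invented personal data
            "see you at the party . thanks again .").split()

    model = defaultdict(list)
    for word, nxt in zip(logs, logs[1:]):
        model[word].append(nxt)

    def generate(prompt, length=4):
        out = [prompt]
        for _ in range(length):
            out.append(random.choice(model[out[-1]]))
        return " ".join(out)

    print(generate("address"))   # -> "address is 12 elm street"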

To make things worse, it was also discovered that ScatterLab had, prior to Lee-Luda's release, uploaded to GitHub a training set of 1,700 sentences drawn from the larger dataset it had collected. GitHub is a platform that developers use to store and share code and data. This GitHub training dataset exposed the names of more than 20 people, along with the locations they had been to, their relationship status, and some of their medical information...

[T]his incident highlights the general trend of the A.I. industry, where individuals have little control over how their personal information is processed and used once collected. It took almost five years for users to recognize that their personal data were being used to train a chatbot model without their consent. Nor did they know that ScatterLab had shared their private conversations on a public platform like GitHub, where anyone can gain access.

What makes this unusual, the article points out, is how the users became aware of just how much their privacy had actually been compromised. "[B]igger tech companies are usually much better at hiding what they actually do with user data, while restricting users from having control and oversight over their own data."

And "Once you give, there's no taking back."
Music

Mixed Reactions to New Nirvana Song Generated by Google's AI (engadget.com) 88

On the 27th anniversary of Kurt Cobain's death, Engadget reports: Were he still alive today, Nirvana frontman Kurt Cobain would be 54 years old. Every February 20th, on his birthday, fans wonder what songs he might have written had he not died by suicide in 1994. While we'll never know the answer to that question, an AI is attempting to fill the gap.

A mental health organization called Over the Bridge used Google's Magenta AI and a generic neural network to analyze more than two dozen Nirvana songs and create a 'new' track from the band. "Drowned in the Sun" opens with reverb-soaked plucking before turning into an assault of distorted power chords. "I don't care/I feel as one, drowned in the sun," Nirvana tribute band frontman Eric Hogan sings in the chorus. In execution, it sounds not all that dissimilar from "You Know You're Right," one of the last songs Nirvana recorded before Cobain's death in 1994.

Other than the voice of Hogan, everything you hear in the song was generated by the two AI programs Over the Bridge used. The organization first fed Magenta songs as MIDI files so that the software could learn the specific notes and harmonies that made the band's tunes so iconic. Humorously, Cobain's loose and aggressive guitar playing style gave Magenta some trouble, with the AI mostly outputting a wall of distortion instead of something akin to his signature melodies. "It was a lot of trial and error," Over the Bridge board member Sean O'Connor told Rolling Stone. Once they had some musical and lyrical samples, the creative team picked the best bits to record. Most of the instrumentation you hear is MIDI tracks with different effects layered on top.
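
For a flavor of what "learning notes from MIDI" can mean at its very simplest, here is a Python sketch using the pretty_midi library (not Magenta, and not Over the Bridge's actual pipeline) that learns pitch transitions from a MIDI file and samples a new melody; the filename is hypothetical:

    import random
    from collections import defaultdict

    import pretty_midi  # pip install pretty_midi

    # Hypothetical input file; any melodic MIDI will do.
    midi = pretty_midi.PrettyMIDI("nirvana_song.mid")
    notes = [note.pitch
             for inst in midi.instruments if not inst.is_drum
             for note in sorted(inst.notes, key=lambda n: n.start)]

    # First-order Markov chain over MIDI pitch numbers.
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)

    melody = [notes[0]]
    for _ in range(32):
        melody.append(random.choice(transitions.get(melody[-1], notes)))
    print(melody)  # raw material a human would then curate and record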

Some thoughts from The Daily Dot: Rolling Stone also highlighted lyrics like, "The sun shines on you but I don't know how," and what is called "a surprisingly anthemic chorus" including the lines, "I don't care/I feel as one, drowned in the sun," remarking that they "bear evocative, Cobain-esque qualities...."

Neil Turkewitz went full Comic Book Guy, opining, "A perfect illustration of the injustice of developing AI through the ingestion of cultural works without the authorization of [its] creator, and how it forces creators to be indentured servants in the production of a future out of their control," adding, "That it's for a good cause is irrelevant."

AI

Even the Best Speech Recognition Systems Exhibit Bias, Study Finds (venturebeat.com) 142

An anonymous reader quotes a report from VentureBeat: Even state-of-the-art automatic speech recognition (ASR) algorithms struggle to recognize the accents of people from certain regions of the world. That's the top-line finding of a new study published by researchers at the University of Amsterdam, the Netherlands Cancer Institute, and the Delft University of Technology, which found that an ASR system for the Dutch language recognized speakers of specific age groups, genders, and countries of origin better than others. The coauthors of this latest research set out to investigate how well an ASR system for Dutch recognizes speech from different groups of speakers. In a series of experiments, they observed whether the ASR system could contend with diversity in speech along the dimensions of gender, age, and accent.

The researchers began by having an ASR system ingest sample data from CGN, an annotated corpus used to train AI language models to recognize the Dutch language. [...] When the researchers ran the trained ASR system through a test set derived from the CGN, they found that it recognized female speech more reliably than male speech regardless of speaking style. Moreover, the system struggled to recognize speech from older people compared with younger, potentially because the older speakers' speech was less clearly articulated. And it had an easier time detecting speech from native speakers versus non-native speakers. Indeed, the worst-recognized native speech -- that of Dutch children -- had a word error rate around 20% better than that of the best non-native age group. In general, the results suggest that teenagers' speech was most accurately interpreted by the system, followed by seniors' (over the age of 65) and children's. This held even for non-native speakers who were highly proficient in Dutch vocabulary and grammar.
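
The comparisons rest on word error rate (WER), the standard ASR metric: the word-level edit distance between the reference transcript and the system's output, divided by the reference length. A minimal Python sketch with invented Dutch samples:

    def wer(ref, hyp):
        # Levenshtein distance over words, normalized by reference length.
        r, h = ref.split(), hyp.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))
        return d[len(r)][len(h)] / len(r)

    samples = [  # (group, reference, ASR output) -- invented data
        ("native",     "ik ga naar huis", "ik ga naar huis"),
        ("non-native", "ik ga naar huis", "ik gaat na huis"),
    ]
    for group, ref, hyp in samples:
        print(group, wer(ref, hyp))   # per-group WER exposes the gap
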
One way to reduce the bias is to mitigate it at the algorithmic level. "[We recommend] framing the problem, developing the team composition and the implementation process from a point of anticipating, proactively spotting, and developing mitigation strategies for affective prejudice [to address bias in ASR systems]," the researchers wrote in a paper detailing their work.

"A direct bias mitigation strategy concerns diversifying and aiming for a balanced representation in the dataset. An indirect bias mitigation strategy deals with diverse team composition: the variety in age, regions, gender, and more provides additional lenses of spotting potential bias in design. Together, they can help ensure a more inclusive developmental environment for ASR."
AI

Stop Calling Everything AI, Machine-Learning Pioneer Says 116

An anonymous reader shares a report: Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley. He notes that the imitation of human thinking is not the sole goal of machine learning -- the engineering field that underlies recent progress in AI -- or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

"People are getting confused about the meaning of AI in discussions of technology trends -- that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans," he says. "We don't have that, but people are talking as if we do." Jordan should know the difference, after all. The IEEE Fellow is one of the world's leading authorities on machine learning. In 2016 he was ranked as the most influential computer scientist by a program that analyzed research publications, Science reported. Jordan helped transform unsupervised machine learning, which can find structure in data without preexisting labels, from a collection of unrelated algorithms to an intellectually coherent field, the Engineering and Technology History Wiki explains. Unsupervised learning plays an important role in scientific applications where there is an absence of established theory that can provide labeled training data.
Intel

Arm Takes Aim at Intel Chips in Biggest Tech Overhaul in Decade (bloomberg.com) 57

Arm unveiled the biggest overhaul of its technology in almost a decade, with new designs targeting markets currently dominated by Intel, the world's largest chipmaker. From a report: The Cambridge, U.K.-based company is adding capabilities to help chips handle machine learning, a powerful type of artificial intelligence software. Additional security features will further lock down data and computer code. The new blueprints should also deliver 30% performance increases over the next two generations of processors for mobile devices and data center servers, said Arm, which is being acquired by Nvidia. The upgrades are needed to support the spread of computing beyond phones, PCs and servers, Arm said. Thousands of devices and appliances are being connected to the internet and gaining new capabilities through the addition of more chips and AI-powered software and services. The company wants its technology to be just as ubiquitous here as it is in the smartphone industry.
AI

OpenAI's Text-Generating System GPT-3 is Now Spewing Out 4.5 Billion Words a Day (theverge.com) 43

One of the biggest trends in machine learning right now is text generation. AI systems learn by absorbing billions of words scraped from the internet and generate text in response to a variety of prompts. It sounds simple, but these machines can be put to a wide array of tasks -- from creating fiction, to writing bad code, to letting you chat with historical figures. From a report: The best-known AI text-generator is OpenAI's GPT-3, which the company recently announced is now being used in more than 300 different apps, by "tens of thousands" of developers, and producing 4.5 billion words per day. That's a lot of robot verbiage. This may be an arbitrary milestone for OpenAI to celebrate, but it's also a useful indicator of the growing scale, impact, and commercial potential of AI text generation. OpenAI started life as a nonprofit, but for the last few years, it has been trying to make money with GPT-3 as its first salable product. The company has an exclusivity deal with Microsoft which gives the tech giant unique access to the program's underlying code, but any firm can apply for access to GPT-3's general API and build services on top of it.
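
For reference, building a service on the API looked roughly like this in 2021 with OpenAI's Python SDK (a sketch: engine names and response fields follow the documentation of the time, the SDK has since changed, and the key is a placeholder):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # placeholder; set your own API key

    response = openai.Completion.create(
        engine="davinci",          # a 2021-era GPT-3 engine name
        prompt="Write a one-line pitch for a text-adventure game:",
        max_tokens=40,
        temperature=0.7,
    )
    print(response["choices"][0]["text"].strip())
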
Programming

Will Programming by Voice Be the Next Frontier in Software Development? (ieee.org) 119

Two software engineers with injuries or chronic pain conditions have both started voice-coding platforms, reports IEEE Spectrum. "Programmers utter commands to manipulate code and create custom commands that cater to and automate their workflows." The voice-coding app Serenade, for instance, has a speech-to-text engine developed specifically for code, unlike Google's speech-to-text API, which is designed for conversational speech. Once a software engineer speaks the code, Serenade's engine feeds that into its natural-language processing layer, whose machine-learning models are trained to identify and translate common programming constructs to syntactically valid code...
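
At its crudest, the command-to-code step can be pictured as pattern matching, as in the hypothetical Python sketch below; real tools like Serenade use trained natural-language models rather than a lookup table of regular expressions:

    import re

    # Toy command grammar: spoken phrase -> emitted Python code.
    COMMANDS = {
        r"define function (\w+)": lambda m: f"def {m.group(1)}():\n    pass",
        r"assign (\w+) to (\w+)": lambda m: f"{m.group(2)} = {m.group(1)}",
    }

    def utterance_to_code(utterance):
        for pattern, emit in COMMANDS.items():
            m = re.fullmatch(pattern, utterance)
            if m:
                return emit(m)
        raise ValueError(f"unrecognized command: {utterance!r}")

    print(utterance_to_code("define function main"))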

Talon has several components to it: speech recognition, eye tracking, and noise recognition. Talon's speech-recognition engine is based on Facebook's Wav2letter automatic speech-recognition system, which [founder Ryan] Hileman extended to accommodate commands for voice coding. Meanwhile, Talon's eye tracking and noise-recognition capabilities simulate navigating with a mouse, moving a cursor around the screen based on eye movements and making clicks based on mouth pops. "That sound is easy to make. It's low effort and takes low latency to recognize, so it's a much faster, nonverbal way of clicking the mouse that doesn't cause vocal strain," Hileman says...

Open-source voice-coding platforms such as Aenea and Caster are free, but both rely on the Dragon speech-recognition engine, which users will have to purchase themselves. That said, Caster offers support for Kaldi, an open-source speech-recognition tool kit, and Windows Speech Recognition, which comes preinstalled in Windows.

AI

OpenAI's Sam Altman: AI-Generated Wealth Will Enable a $13,500-a-Year Basic Income (msn.com) 170

CNBC wrote recently, "Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now. So says Sam Altman, co-founder and president of San Francisco-headquartered, artificial intelligence-focused nonprofit OpenAI..." [I]f the government collects and redistributes the wealth that AI will generate, AI's exponential productivity gains could "make the society of the future much less divisive and enable everyone to participate in its gains," Altman says.... As the pace of development accelerates, AI "will create phenomenal wealth" but at the same time the price of labor "will fall towards zero," Altman said. "It sounds utopian, but it's something technology can deliver (and in some cases already has). Imagine a world where, for decades, everything — housing, education, food, clothing, etc. — became half as expensive every two years."

In this future, where wealth will come from companies and land, governments should tax capital, not labor, and those taxes should be distributed to citizens, Altman said. In his post, Altman proposed an American Equity Fund that taxes sufficiently large companies 2.5% of their market value in the form of company shares, and 2.5% of the value of all land in the form of dollars... All citizens over 18 would receive payment in both dollars and company shares.... "As people's individual assets rise in tandem with the country's, they have a literal stake in seeing their country do well," Altman said. With this system in mind, in 10 years, the 250 million adults living in America would get $13,500 per year, Altman said... "That dividend could be much higher if AI accelerates growth, but even if it's not, $13,500 will have much greater purchasing power than it does now because technology will have greatly reduced the cost of goods and services," Altman wrote. "And that effective purchasing power will go up dramatically every year."
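
The arithmetic behind the headline number is straightforward (a back-of-envelope sketch; Altman's actual proposal pays out in company shares plus land-tax dollars and assumes AI-driven growth in the taxed base):

    adults = 250_000_000
    dividend = 13_500                      # dollars per adult per year
    total_payout = adults * dividend       # $3.375 trillion per year

    # Funding that payout purely from a 2.5% annual levy would require
    # a taxed base of roughly:
    required_base = total_payout / 0.025   # $135 trillion
    print(f"${total_payout / 1e12:.3f}T/yr payout, ${required_base / 1e12:.0f}T base")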

Elon Musk has hinted at a similar future. "There is a pretty good chance we end up with a universal basic income, or something like that, due to automation," Musk told CNBC in 2016. "Yeah, I am not sure what else one would do. I think that is what would happen." Musk is also a co-founder of OpenAI but left the board in 2018 citing the fact that Tesla was becoming an AI company as it developed self-driving capabilities. Such a system is "both pro-business and pro-people," Altman said, and would therefore bring together "a remarkably broad constituency."

"The changes coming are unstoppable," Altman said. "If we embrace them and plan for them, we can use them to create a much fairer, happier, and more prosperous society. The future can be almost unimaginably great."

Google

Why a Young Professor Turned Down a $60,000 Research Grant From Google (cnn.com) 57

"When Luke Stark sought money from Google in November he had no idea he'd be turning down $60,000 from the tech giant in March," reports CNN: Stark, an assistant professor at Western University in Ontario, Canada, studies the social and ethical impacts of artificial intelligence. In late November, he applied for a Google Research Scholar award, a no-strings-attached research grant of up to $60,000 to support professors who are early in their careers. He put in for the award, he said, "because of my sense at the time that Google was building a really strong, potentially industry-leading ethical AI team...."

Gebru's ouster kicked off a months-long crisis for the company, including employee departures, a leadership shuffle, and an apology from Google's CEO for how the circumstances of Gebru's departure caused some employees to question their place there. Google conducted an internal investigation into the matter, results of which were announced on the same day the company fired Gebru's co-team leader, Margaret Mitchell, who had been consistently critical of the company on Twitter following Gebru's exit. (Google cited "multiple violations" of its code of conduct.) Meanwhile, researchers outside Google, particularly in AI, have become increasingly distrustful of the company's historically well-regarded scholarship and angry over its treatment of Gebru and Mitchell.

All of this came into sharp focus for Stark on Wednesday, March 10, when Google sent him a congratulatory note, offering him $60,000 for his proposal for a research project that would look at how companies are rolling out AI that is used to detect emotions. Stark said he immediately felt he needed to reject the award to show his support for Gebru and Mitchell, as well as those who still remain on the ethical AI team at Google...

Gebru said she appreciated Stark's action.

Stark is the first person to turn down one of the 6,500 academic and research grants Google has given out over the last 15 years, the company tells CNN. But CNN also notes some AI conference organizers are now rethinking having Google as a sponsor.

"The widening fallout from Google's tensions with its ethical AI team now pose a risk to the company's reputation and stature in the AI community. This is crucial as Google battles for talent — both as employees at the company and names connected to it in the academic community."
AI

Watch AI Grow a Walking Caterpillar In Minecraft (sciencemag.org) 22

sciencehabit shares a report from Science Magazine: The video in this story will be familiar to anyone who's played the 3D world-building game Minecraft. But it's not a human constructing these castles, trees, and caterpillars -- it's artificial intelligence. The algorithm takes its cue from the "Game of Life," a so-called cellular automaton. There, squares in a grid turn black or white over a series of timesteps based on how many of their neighbors are black or white. The program mimics biological development, in which cells in an embryo behave according to cues in their local environment.
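
For readers who haven't met it, one step of Conway's Game of Life fits in a few lines of Python with NumPy; the Minecraft work replaces fixed rules like these with a learned neural network acting on each cell's neighborhood:

    import numpy as np

    def life_step(grid):
        # Count each cell's 8 neighbors via shifted copies of the grid.
        n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[[1, 2, 3, 3, 3], [2, 3, 1, 2, 3]] = 1   # a classic glider
    print(life_step(glider))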

The scientists taught neural networks to grow single cubes into complex designs containing thousands of bricks, like the castle or tree or furnished apartment building above, and even into functional machines, like the caterpillar. And when they sliced a creation in half, it regenerated. (Normally in Minecraft, a user would have to reconstruct the object by hand.) Going forward, the researchers hope to train systems to grow not only predefined forms, but to invent designs that perform certain functions. This could include flying, allowing engineers to find solutions human designers would not have otherwise foreseen. Or tiny robots might use local interactions to assemble rescue robots or self-healing buildings.
The researchers presented their system in a paper posted on arXiv.
AI

ACLU To FOIA Information About National Security Uses of AI (axios.com) 12

The ACLU will be seeking information about how the government is using artificial intelligence in national security, Axios reported Friday. From a report: The development of AI has major implications for security, surveillance, and justice. The ACLU's request may help shed some light on the government's often opaque applications of AI. Later today the ACLU will be filing a broad Freedom of Information Act (FOIA) request to the CIA, the NSA, the Department of Homeland Security and other agencies concerning the government's use of AI, especially in the area of national security. "The problem with these AI systems is that they're black boxes," says Patrick Toomey, senior staff attorney at the ACLU National Security Project. "The public needs to know exactly what kinds of fundamental decisions about our lives the government is handing over to AI." The ACLU is specifically concerned about "vetting and screening processes in agencies like Homeland Security, and tools that can analyze voice, data and video," says Toomey. Another area of concern is the possibility that AI systems could be "biased against people of color, women and marginalized communities," he adds. "AI systems could be used to supercharge government activities to unfairly scrutinize communities through intrusive surveillance, questioning and even detention and watchlisting."
AI

AI At Work: Staff 'Hired and Fired By Algorithm' (bbc.com) 122

The Trades Union Congress (TUC) is calling for new legal protections for workers, warning that they could soon be "hired and fired by algorithm." "Among the changes it is calling for is a legal right to have any 'high-risk' decision reviewed by a human," reports the BBC. From the report: TUC general secretary Frances O'Grady said the use of AI at work stood at "a fork in the road." "AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work -- like who gets hired and fired. "Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment -- especially for those in insecure work and the gig economy," she warned.

The union body is calling for:
- An obligation on employers to consult unions on the use of "high risk" or "intrusive" AI at work
- The legal right to have a human review decisions
- A legal right to "switch off" from work and not be expected to answer calls or emails
- Changes to UK law to protect against discrimination by algorithm


Hardware

Samsung Unveils 512GB DDR5 RAM Module (engadget.com) 33

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. Engadget reports: The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds -- over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning functions, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications. HKMG, a transistor technology first commercialized by Intel, replaces the silicon dioxide gate dielectric with hafnium-based material and the usual polysilicon gate electrodes with metal. All of that allows for higher chip densities while reducing current leakage.

Each package stacks eight 16Gb DRAM dies for a capacity of 128Gb, or 16GB; Samsung needs 32 such packages to build the 512GB module. On top of the higher speeds and capacity, Samsung said that the chip uses 13 percent less power than non-HKMG modules -- ideal for data centers, but not so bad for regular PCs, either. With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel.
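
Both headline figures check out with simple arithmetic (a sketch of the math, not Samsung's spec sheet):

    transfer_rate_mts = 7_200       # DDR5-7200: mega-transfers per second
    bus_width_bytes = 8             # 64-bit channel
    print(transfer_rate_mts * bus_width_bytes / 1_000)   # 57.6 GB/s per channel

    dies_per_package, die_gbits = 8, 16
    package_gbytes = dies_per_package * die_gbits // 8   # 16GB per package
    print(32 * package_gbytes)                           # 32 packages -> 512GB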

The Internet

LA Times Investigates Sneaker Resale Industry As Amazon Promotes It To Kids (latimes.com) 40

theodp writes: "Sneakerheads like to complain about the one that got away," writes the L.A. Times' Ronald D. White. "About haunting sneaker apps and websites yet failing to win shoe-drop raffles or find what they want at semiaffordable prices. About how the system must be rigged by resellers using bots and inside connections. Now, a scandal involving a Nike executive and her reseller son is roiling the sneaker world, highlighting worst suspicions about a booming market in which shoes can be traded like stocks. For serious sneaker collectors, this is more than a tempest in a shoebox."

In a case of remarkably bad timing, just as the ethics of the lucrative sneaker resale industry came under scrutiny in the wake of the Nike scandal and questions were raised about exorbitant pandemic-fueled profits, Amazon launched a program for K-12 students that highlights how CS makes the sneaker resale marketplace gold rush possible. "Amazon and the AWS Services are really the backbone and foundation of how we do all of our work in Data Science," explains a GOAT Data Platform Engineer in an Amazon Future Engineer lesson that teaches kids how AI and data can be used to help flip sneakers by classifying GOAT website visitors as "Hype" ['willing to splurge'], "Core", or "Under Retail" user types.
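
A hypothetical Python sketch of the kind of visitor segmentation the lesson describes; the label names come from the lesson, while the price-ratio rule and thresholds are invented for illustration:

    def classify_visitor(avg_paid, retail):
        # Bucket a shopper by what they historically pay relative to retail.
        ratio = avg_paid / retail
        if ratio >= 1.5:
            return "Hype"           # "willing to splurge" well above retail
        if ratio >= 1.0:
            return "Core"
        return "Under Retail"

    print(classify_visitor(avg_paid=300, retail=180))   # -> Hype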

Privacy

Amazon Delivery Drivers Forced To Sign 'Biometric Consent' Form or Lose Job (vice.com) 108

Amazon delivery drivers nationwide have to sign a "biometric consent" form this week that grants the tech behemoth permission to use AI-powered cameras to access drivers' location, movement, and biometric data. From a report: If the company's delivery drivers, who number around 75,000 in the United States, refuse to sign these forms, they lose their jobs. The form requires drivers to agree to facial recognition and other biometric data collection within the trucks they drive. "Amazon may... use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account," the form reads. "Using your photograph, this Technology, may create Biometric Information, and collect, store, and use Biometric Information from such photographs." It adds that "this Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance ...as a condition of delivering packages for Amazon, you consent to the use of Technology."
