Artificial Intelligence

Robots need a new philosophy to get a grip — ScienceDaily



Robots need to know the reason why they are doing a job if they are to work effectively and safely alongside people in the near future. In simple terms, this means machines need to understand motive the way humans do, and not just perform tasks blindly, without context.

According to a new article by the National Centre for Nuclear Robotics, based at the University of Birmingham, this could herald a profound change for the world of robotics, but one that is necessary.

Lead author Dr. Valerio Ortenzi, at the University of Birmingham, argues the shift in thinking will be necessary as economies embrace automation, connectivity and digitisation ('Industry 4.0') and levels of human-robot interaction, whether in factories or homes, increase dramatically.

The paper, published in Nature Machine Intelligence, explores the issue of robots using objects. 'Grasping' is an action perfected long ago in nature but one that represents the cutting edge of robotics research.

Most factory-based machines are 'dumb', blindly picking up familiar objects that appear in pre-determined places at just the right moment. Getting a machine to pick up unfamiliar objects, randomly presented, requires the seamless interaction of multiple, complex technologies. These include vision systems and advanced AI so the machine can see the target and determine its properties (for example, is it rigid or flexible?); and potentially, sensors in the gripper are required so the robot does not inadvertently crush an object it has been told to pick up.

Even when all this is accomplished, researchers at the National Centre for Nuclear Robotics highlighted a fundamental issue: what has traditionally counted as a 'successful' grasp for a robot might actually be a real-world failure, because the machine does not take into account what the goal is and why it is picking an object up.

The Nature Machine Intelligence paper cites the example of a robot in a factory picking up an object for delivery to a customer. It successfully executes the task, holding the package securely without causing damage. Unfortunately, the robot's gripper obscures a crucial barcode, which means the object can't be tracked and the firm has no idea if the item has been picked up or not; the whole delivery system breaks down because the robot does not know the consequences of holding a box the wrong way.
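The barcode example boils down to adding a task constraint on top of a purely mechanical grasp score. A minimal sketch of that idea follows; the grasp names, stability numbers and "faces" are all invented for illustration and do not come from the paper.

```python
# Hypothetical grasp candidates for a parcel: each has a stability
# score and names the face of the box the gripper would cover.
candidates = [
    {"name": "top pinch",   "stability": 0.9, "covers": "top"},
    {"name": "side wrap",   "stability": 0.8, "covers": "side"},
    {"name": "front clamp", "stability": 0.7, "covers": "front"},
]

def best_grasp(grasps, blocked_faces):
    """Return the most stable grasp that leaves every face the task
    still needs (e.g. the one carrying the barcode) uncovered."""
    feasible = [g for g in grasps if g["covers"] not in blocked_faces]
    return max(feasible, key=lambda g: g["stability"]) if feasible else None

# Judged on stability alone, the robot chooses "top pinch"; once the
# task says the barcode on the top face must stay visible, it settles
# for the slightly less stable "side wrap".
print(best_grasp(candidates, blocked_faces=set())["name"])    # top pinch
print(best_grasp(candidates, blocked_faces={"top"})["name"])  # side wrap
```

The point of the sketch is that the "best" grasp changes as soon as the goal of the task is represented at all, which is exactly the gap the authors identify in traditional grasp metrics.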

Dr. Ortenzi and his co-authors give other examples, involving robots working alongside people.

"Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.

"Another scenario envisages a robot passing a glass of water to a resident in a care home. It must ensure that it doesn't drop the glass but also that water doesn't spill over the recipient during the act of passing, and that the glass is presented in such a way that the person can take hold of it.

"What is obvious to humans has to be programmed into a machine, and this requires a profoundly different approach. The traditional metrics used by researchers over the past twenty years to assess robotic manipulation are not sufficient. In the most practical sense, robots need a new philosophy to get a grip."

Professor Rustam Stolkin, NCNR Director, said: "The National Centre for Nuclear Robotics is unique in working on practical problems with industry, while simultaneously producing the highest calibre of cutting-edge academic research, exemplified by this landmark paper."

The research was carried out in collaboration with the Centre of Excellence for Robotic Vision at Queensland University of Technology, Australia; Scuola Superiore Sant'Anna, Italy; the German Aerospace Center (DLR), Germany; and the University of Pisa, Italy.


Artificial Intelligence

High-quality Deepfake Videos Made with AI Seen as a National Security Threat



Deepfake videos so lifelike that they cannot be detected as fakes have the FBI concerned that they pose a national security threat. (GETTY IMAGES)

By AI Trends Staff

The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.

The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. "What we're concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning techniques," said Chris Piehota, executive assistant director of the FBI's science and technology division, in an account in WSJ Pro.

The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned national security could be compromised by fraudulent videos created to mimic public figures. "As the AI continues to improve and evolve, we're going to get to a point where there's no discernible difference between an AI-generated video and an actual video," Piehota stated.

Chris Piehota, executive assistant director, FBI science and technology division

The word 'deepfake' is a portmanteau of "deep learning" and "fake." It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person's likeness.

The FBI has created its own deepfakes in a test lab, which have been able to produce synthetic personas that can pass some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3-D printers powered with AI models can be used to copy someone's fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.

Threat to US Elections Seen

Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitudes of voters. AI-enhanced deepfakes can undermine the public's confidence in democratic institutions, even if proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.

"It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that's deadly for democracy," she stated in the WSJ Pro account.

Suzanne Spaulding, senior adviser, Center for Strategic and International Studies

Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, then a Ph.D. student who now works at Apple, according to an account in Live Science.

A GAN algorithm pits two AIs against each other: one generates content such as photo images, while an adversary tries to guess whether the images are real or fake. The generating AI starts off at a disadvantage, meaning its adversary can easily distinguish real from fake photos. But over time, the generator gets better and begins producing content that looks lifelike.
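The adversarial loop described above can be shown at toy scale without any deep-learning library. The sketch below, which is an illustration rather than a practical GAN, uses a one-parameter-pair "generator" and a logistic "discriminator" on 1-D data; all the numbers (learning rate, step count, the real data's mean of 4.0) are arbitrary choices for the demo.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Generator G(z) = a*z + b tries to turn noise into samples that look
# like the real data (Gaussian with mean 4.0); discriminator
# D(x) = sigmoid(w*x + c) tries to tell real samples from generated ones.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.02

for _ in range(3000):
    x = random.gauss(4.0, 1.0)   # real sample
    z = random.gauss(0.0, 1.0)   # noise fed to the generator
    y = a * z + b                # fake sample
    # Discriminator step: push D(x) toward 1 and D(y) toward 0.
    p_real = sigmoid(w * x + c)
    p_fake = sigmoid(w * y + c)
    w += lr * ((1.0 - p_real) * x - p_fake * y)
    c += lr * ((1.0 - p_real) - p_fake)
    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    p_fake = sigmoid(w * y + c)
    a += lr * (1.0 - p_fake) * w * z
    b += lr * (1.0 - p_fake) * w

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(fake_mean, 1))  # drifts toward the real data's mean
```

Each update nudges the generator toward the region the discriminator currently labels "real", which is the same dynamic that, at scale with deep networks, produces photorealistic faces.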

For an example, see NVIDIA's project, which uses a GAN to create entirely fake, and entirely lifelike, photos of people.

Examples are beginning to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN to alter a video of former President Barack Obama, so his lips moved in accordance with the words of a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, a deepfake could generate realistic videos of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files, to splice new words into a video of a person talking, making it appear they said something they never said.

All of this could cause attentive viewers to become more wary of content on the internet.

High tech is attempting to field a defense against deepfakes.

Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.

The clips Google released were created in collaboration with Alphabet subsidiary Jigsaw. They focused on technology and politics, featuring paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.

Some researchers are skeptical this approach will be effective. "The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated," said Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. "Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content."

Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, together with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley; University of Maryland, College Park; and State University of New York at Albany, according to an account in VentureBeat.

Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants' models if they choose; and Google's Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.

"'Deepfake' techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online," noted Facebook CTO Mike Schroepfer in a blog post. "Yet the industry doesn't have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer."

The data set contains more than 100,000 videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer. The data does not include any personal user information and features only participants who have agreed to have their images used. Access to the dataset is gated so that only teams with a license can access it.

The Deepfake Detection Challenge is overseen by the Partnership on AI's Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.

Read the source articles in WSJ Pro, Live Science, Wired and VentureBeat.


Artificial Intelligence

Reinforcement learning’s foundational flaw



Submitted by /u/Truetree9999


Artificial Intelligence

Technique reveals whether models of patient risk are accurate



After a patient has a heart attack or stroke, doctors often use risk models to help guide their treatment. These models can calculate a patient's risk of dying based on factors such as the patient's age, symptoms, and other characteristics.

While these models are useful in general, they do not make accurate predictions for many patients, which can lead doctors to choose ineffective or unnecessarily risky treatments for some patients.

"Every risk model is evaluated on some dataset of patients, and even if it has high accuracy, it is never 100 percent accurate in practice," says Collin Stultz, a professor of electrical engineering and computer science at MIT and a cardiologist at Massachusetts General Hospital. "There are going to be some patients for which the model gets the wrong answer, and that can be disastrous."

Stultz and his colleagues from MIT, IBM Research, and the University of Massachusetts Medical School have now developed a method that allows them to determine whether a particular model's results can be trusted for a given patient. This could help guide doctors to choose better treatments for those patients, the researchers say.

Stultz, who is also a professor of health sciences and technology, a member of MIT's Institute for Medical Engineering and Science and Research Laboratory of Electronics, and an associate member of the Computer Science and Artificial Intelligence Laboratory, is the senior author of the new study. MIT graduate student Paul Myers is the lead author of the paper, which appears today in Digital Medicine.

Modeling risk

Computer models that can predict a patient's risk of harmful events, including death, are used widely in medicine. These models are often created by training machine-learning algorithms to analyze patient datasets that include a variety of information about the patients, including their health outcomes.

While these models have high overall accuracy, "very little thought has gone into identifying when a model is likely to fail," Stultz says. "We are trying to create a shift in the way that people think about these machine-learning models. Thinking about when to apply a model is really important because the consequence of being wrong can be fatal."

For instance, a patient at high risk who is misclassified would not receive sufficiently aggressive treatment, while a low-risk patient inaccurately determined to be at high risk could receive unnecessary, potentially harmful interventions.

To illustrate how the method works, the researchers chose to focus on a widely used risk model called the GRACE risk score, but the technique can be applied to nearly any type of risk model. GRACE, which stands for Global Registry of Acute Coronary Events, is a large dataset that was used to develop a risk model that evaluates a patient's risk of death within six months after suffering an acute coronary syndrome (a condition caused by reduced blood flow to the heart). The resulting risk assessment is based on age, blood pressure, heart rate, and other readily available clinical features.
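To make the shape of such a calculator concrete, here is a toy point-based score over the same kinds of inputs the article lists. The weights and thresholds are entirely invented for illustration; they are not the real GRACE coefficients.

```python
def toy_risk_score(age, systolic_bp, heart_rate):
    """Illustrative only: a linear point score over age, systolic
    blood pressure, and heart rate. The weights are made up and are
    NOT the actual GRACE model; they just show the additive,
    feature-based form such risk calculators take."""
    points = 0
    points += max(0, age - 40)             # older patients score higher
    points += max(0, 100 - systolic_bp)    # low blood pressure adds risk
    points += max(0, heart_rate - 80)      # elevated heart rate adds risk
    return points

# A hypothetical older patient with low blood pressure and an
# elevated heart rate accumulates points from all three terms.
print(toy_risk_score(age=72, systolic_bp=95, heart_rate=96))  # 53
```

A real score maps points like these onto a mortality probability using coefficients fitted to registry data; the point of the sketch is only that the output is a single number summarizing a few routine clinical features.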

The researchers' new technique generates an "unreliability score" that ranges from zero to one. For a given risk-model prediction, the higher the score, the more unreliable that prediction. The unreliability score is based on a comparison of the risk prediction generated by a particular model, such as the GRACE risk score, with the prediction produced by a different model that was trained on the same dataset. If the models produce different results, then it is likely that the risk-model prediction for that patient is not reliable, Stultz says.
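The comparison idea can be sketched in a few lines. The function below is a simplified stand-in for the paper's measure (which the article does not spell out), using plain absolute disagreement between two models' predicted probabilities; the patients and the 0.3 flagging threshold are made up.

```python
def unreliability(p_primary, p_auxiliary):
    """Simplified stand-in for the paper's measure: absolute
    disagreement between the primary risk model's predicted
    probability and an auxiliary model trained on the same data.
    Both inputs are probabilities in [0, 1], so the score is too."""
    return abs(p_primary - p_auxiliary)

# Two made-up patients: the models agree on the first and disagree
# on the second, so only the second prediction gets flagged.
for name, p1, p2 in [("patient A", 0.12, 0.14), ("patient B", 0.65, 0.10)]:
    score = unreliability(p1, p2)
    print(name, round(score, 2), "flag" if score > 0.3 else "ok")
```

The key property this preserves from the article: when two models trained on the same data diverge for one patient, neither prediction should be trusted for that patient, even if both models have high accuracy overall.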

"What we show in this paper is, if you look at patients who have the highest unreliability scores, in the top 1 percent, the risk prediction for that patient yields the same information as flipping a coin," Stultz says. "For those patients, the GRACE score cannot discriminate between those who die and those who do not. It is completely useless for those patients."

The researchers' findings also suggested that the patients for whom the models do not work well tend to be older and to have a higher incidence of cardiac risk factors.

One significant advantage of the method is that the researchers derived a formula that tells how much two predictions would disagree, without having to build a completely new model based on the original dataset.

"You don't need access to the training dataset itself in order to compute this unreliability measurement, and that's important because there are privacy issues that prevent these clinical datasets from being widely accessible to different people," Stultz says.

Retraining the model

The researchers are now designing a user interface that doctors could use to evaluate whether a given patient's GRACE score is reliable. In the longer term, they also hope to improve the reliability of risk models by making it easier to retrain models on data that include more patients who are similar to the patient being diagnosed.

"If the model is simple enough, then retraining it can be fast. You could imagine a whole suite of software integrated into the electronic health record that would automatically tell you whether a particular risk score is appropriate for a given patient, and then try to do things on the fly, like retrain new models that might be more appropriate," Stultz says.

The research was funded by the MIT-IBM Watson AI Lab. Other authors of the paper include MIT graduate student Wangzhi Dai; Kenney Ng, Kristen Severson, and Uri Kartoun of the Center for Computational Health at IBM Research; and Wei Huang and Frederick Anderson of the Center for Outcomes Research at the University of Massachusetts Medical School.


