MSFT CEO Warns Against '1984' Future… Emotion Reading Tech 'spots Criminals' Before They Act… AI Detective Learns To Crack Cases

Yahoo News reports:

Microsoft chief executive Satya Nadella said Wednesday tech developers have a responsibility to prevent a dystopian “1984” future as the US technology titan unveiled a fresh initiative to bring artificial intelligence into the mainstream.

At the start of its annual Build Conference, Microsoft sought to showcase applications with artificial intelligence that could tap into services in the internet “cloud” and even take advantage of computing power in nearby machines.

Nadella spent time on stage at the Seattle conference stressing a need to build trust in technology, saying new applications must avoid dystopian futures feared by some.

Nadella’s presentation included images from George Orwell’s “1984” and Aldous Huxley’s “Brave New World” to underscore the issue of responsibility of those creating new technologies.


“What Orwell prophesied in ‘1984,’ where technology was being used to monitor, control, dictate, or what Huxley imagined we may do just by distracting ourselves without any meaning or purpose — neither of these futures is something that we want,” he said.

“The future of computing is going to be defined by the choices that you as developers make and the impact of those choices on the world.”

telegraph.co.uk reports:
Emotion reading technology could soon be used by police after a Russian firm created a tool that can identify people in a crowd and tell if they are angry, stressed or nervous.
The software, created by NTechLab, can monitor citizens for suspicious behaviour by tracking identity, age, gender and current emotional state. It could be used to pre-emptively stop criminals and potential terrorists.
“The recognition gives a new level of security in the street because in a couple of seconds you can identify terrorists or criminals or killers,” said Alexander Kabakov, NTechLab chief executive.
The emotion recognition tool is a new part of NTechLab’s facial recognition software, which made the headlines last year when it was used to power the FindFace app that can track down anyone on Russian social network VKontakte from a photo.
The identification app claims to have reconnected long-lost friends and family members, as well as helped police solve two cold cases and identify criminals.
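For readers wondering what that kind of crowd monitoring looks like in broad strokes, here is a rough Python sketch of the general pipeline: detect faces in a video feed, then classify each face’s emotional state. NTechLab’s actual system is proprietary; the OpenCV face detector and the `classify_emotion` stub below are illustrative stand-ins, not its real components.

```python
# Rough sketch of a crowd-monitoring loop: detect faces, classify emotion.
# The emotion model is a placeholder -- swap in a trained classifier in practice.
import cv2  # OpenCV, for video capture and face detection


def classify_emotion(face_img):
    # Placeholder: a real deployment would run a trained model here and
    # return a label such as "neutral", "angry", "stressed" or "nervous".
    return "neutral"


detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # 0 = default camera; a CCTV feed in a real system
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label = classify_emotion(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("crowd monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The hard part, of course, is the classifier itself and the accuracy claims made for it, which the sketch deliberately leaves as a stub.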
newscientist.com reports:
A system called VALCRI should do the laborious parts of a crime analyst’s job in seconds, while also suggesting new lines of enquiry and possible motives.
MOVE over, Sherlock. UK police are trialling a computer system that can piece together what might have happened at a crime scene. The idea is that the system, called VALCRI, will be able to do the laborious parts of a crime analyst’s job in seconds, freeing them to focus on the case, while also provoking new lines of enquiry and possible narratives that may have been missed.
“Everyone thinks policing is about connecting the dots, but that’s the easy bit,” says William Wong, who leads the project at Middlesex University London. “The hard part is working out which dots need to be connected.”
VALCRI’s main job is to help generate plausible ideas about how, when and why a crime was committed as well as who did it. It scans millions of police records, interviews, pictures, videos and more, to identify connections that it thinks are relevant. All of this is then presented on two large touchscreens for a crime analyst to interact with.

Spotting patterns

The system might spot that shell casings were found at several recent crime scenes including the one the police are focusing on now, for example. “An analyst can then say whether this is relevant or not and VALCRI will adjust the results,” says Neesha Kodagoda, also at Middlesex. Thanks to machine learning, the system improves its searches on the basis of such interactions with analysts, who can raise or lower the importance of different sets of criteria with a swipe.
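To make that relevance-feedback idea concrete, here is a minimal Python sketch of ranking candidate cases by weighted criteria and letting an analyst nudge a weight up or down. The criterion names, similarity scores and weights are invented for illustration; VALCRI’s actual ranking model is not public.

```python
# Illustrative relevance feedback: rank records by weighted criteria,
# then adjust a weight when the analyst flags a criterion as (ir)relevant.
CRITERIA = ["location", "time_of_day", "modus_operandi", "ballistics"]


def score(record, weights):
    """Weighted sum of per-criterion similarity scores (each 0..1)."""
    return sum(weights[c] * record["similarity"][c] for c in CRITERIA)


def adjust(weights, criterion, delta):
    """Analyst swipes a criterion up or down; clamp the weight to [0, 1]."""
    weights[criterion] = min(1.0, max(0.0, weights[criterion] + delta))
    return weights


weights = {c: 0.5 for c in CRITERIA}  # start neutral
records = [
    {"id": "case-0817", "similarity": {"location": 0.9, "time_of_day": 0.4,
                                       "modus_operandi": 0.7, "ballistics": 0.95}},
    {"id": "case-0532", "similarity": {"location": 0.2, "time_of_day": 0.8,
                                       "modus_operandi": 0.3, "ballistics": 0.1}},
]

weights = adjust(weights, "ballistics", +0.3)  # shell casings judged relevant
ranked = sorted(records, key=lambda r: score(r, weights), reverse=True)
print([r["id"] for r in ranked])
```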
When an unsolved crime lands on an analyst’s desk, one of the first things they have to do is search police databases for incidents that could be related based on their location, time or modus operandi, and collect details of all of the people involved. “An experienced analyst needs 73 individual searches to gather all of this information, before manually putting it into an easily digestible form,” says Kodagoda. “VALCRI can do this with a single click.”
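A toy version of that “single click” consolidation might look like the following. The incident fields, matching rules and thresholds are assumptions made for illustration only, not VALCRI’s schema.

```python
# Gather everything plausibly related to a case by place, time window,
# or modus operandi in one call, instead of dozens of separate searches.
from datetime import date, timedelta


def related_incidents(case, incidents, days=30):
    """Return (incident id, reasons) pairs for incidents linked to `case`."""
    hits = []
    for inc in incidents:
        same_place = inc["district"] == case["district"]
        close_in_time = abs(inc["date"] - case["date"]) <= timedelta(days=days)
        shared_mo = bool(set(inc["mo_tags"]) & set(case["mo_tags"]))
        if same_place or close_in_time or shared_mo:
            hits.append((inc["id"], {"place": same_place,
                                     "time": close_in_time,
                                     "mo": shared_mo}))
    return hits


case = {"id": "case-0817", "district": "Digbeth", "date": date(2017, 4, 2),
        "mo_tags": {"burglary", "forced_entry"}}
incidents = [
    {"id": "inc-11", "district": "Digbeth", "date": date(2017, 3, 20),
     "mo_tags": {"forced_entry"}},
    {"id": "inc-12", "district": "Solihull", "date": date(2016, 9, 1),
     "mo_tags": {"vehicle_theft"}},
]
print(related_incidents(case, incidents))  # inc-11 matches on place, time and MO
```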
This is no mean feat. A lot of the information recorded in police reports is in side notes and descriptions, but the algorithms powering VALCRI can understand what is written – at a basic level.
For example, interviews with people at three different crime scenes may describe an untidy person nearby. One person might have used the word “scruffy”, another “dishevelled” and the third “messy”. A human would have no trouble considering that all three might be describing the same person. Improvements in artificial intelligence mean VALCRI can make such links too. The system can also use face recognition software to identify people in CCTV footage or pictures taken at a scene.
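Here is a toy illustration of how such descriptions can be linked automatically, using word embeddings and cosine similarity. The `embed` function and its hand-made vectors stand in for a pretrained model (word2vec, GloVe, fastText or similar) and say nothing about VALCRI’s internals.

```python
# Link witness descriptions that use different words for the same idea
# by comparing word vectors with cosine similarity.
import numpy as np


def embed(word):
    # Stand-in for a lookup in a pretrained embedding model.
    # These toy vectors exist only so the example runs end to end.
    toy_vectors = {
        "scruffy":     np.array([0.9, 0.8, 0.1]),
        "dishevelled": np.array([0.85, 0.75, 0.15]),
        "messy":       np.array([0.8, 0.9, 0.05]),
        "tall":        np.array([0.1, 0.0, 0.95]),
    }
    return toy_vectors[word]


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


descriptions = ["scruffy", "dishevelled", "messy", "tall"]
for i, a in enumerate(descriptions):
    for b in descriptions[i + 1:]:
        sim = cosine(embed(a), embed(b))
        if sim > 0.95:  # threshold chosen for this toy data
            print(f"'{a}' and '{b}' may describe the same person ({sim:.2f})")
```

Run on this toy data, the three “untidy” words cluster together while “tall” does not, which is the kind of link the researchers describe the system making across interview notes.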
West Midlands Police in the UK are currently testing VALCRI with three years’ worth of real but anonymised data, totalling around 6.5 million records. Police in Antwerp, Belgium, are trialling a version of the system too.
mashable.com reports:
Teaching a robot how to do something is usually done either by programming it to perform a specific task or by demonstrating that task for the robot to observe and imitate. The latter method, however, hasn’t so far been accurate enough for robots to pass what they learn on to other robots.

That’s changing, however, thanks to researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and their new teaching method, called C-LEARN. That could have far-reaching consequences by making it easier for non-programmers to teach robots how to perform certain tasks. Even better, it allows robots to teach other robots how to perform the same tasks.
The system does this by giving the robot a knowledge base with information on how to reach and grab different objects. Then, using a 3D interface, the robot is shown a single demo of how to, say, pick up a cylinder or open a door. The task is divided into important moments called “keyframes” — steps that the robot needs to take in order to correctly perform the task.
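To picture what a keyframe representation might look like, here is a minimal Python sketch in which a demonstrated task is stored as an ordered list of key steps that another robot could replay. The data structures and the `move_to`/`set_gripper` robot interface are assumptions for illustration; C-LEARN’s real representation also draws on its knowledge base of motion constraints.

```python
# A demonstrated task as an ordered list of keyframes that any robot
# exposing move_to() and set_gripper() could replay.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Keyframe:
    name: str                                 # e.g. "approach", "grasp", "lift"
    gripper_pose: Tuple[float, float, float]  # x, y, z target for the hand
    gripper_closed: bool                      # whether the gripper is closed


@dataclass
class Task:
    name: str
    keyframes: List[Keyframe]


def replay(task: Task, robot):
    """Transfer the task to another robot by stepping through its keyframes."""
    for kf in task.keyframes:
        robot.move_to(kf.gripper_pose)
        robot.set_gripper(closed=kf.gripper_closed)


pick_up_cylinder = Task("pick up cylinder", [
    Keyframe("approach", (0.40, 0.10, 0.30), gripper_closed=False),
    Keyframe("grasp",    (0.40, 0.10, 0.12), gripper_closed=True),
    Keyframe("lift",     (0.40, 0.10, 0.35), gripper_closed=True),
])
```

Because the task lives in this compact, robot-agnostic form rather than as raw joint trajectories, handing it to a second robot is just another call to `replay`, which is the robot-to-robot transfer the article highlights.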

 
