ARTIFICIAL INTELLIGENCE - Era of precious time


How Humans and Robots Can Become Teammates

In brief:  Researchers at MIT observed that language is a function of humans cooperating on tasks, and imagined how robots might use language when working with humans to achieve a shared result. The work shows that language can be used to help train a robotic arm to perform a task, such as helping to prepare meals in a kitchen. The approach relies on reinforcement learning (RL), which has been widely used in recent years; Google's DeepMind unit used RL to train programs that beat humans at chess and the game of Go. The robot program can query a human or make requests of a human. The experimental setting is a kitchen in which a person makes sandwiches while the robot arm pours juice into cups. Because the robot and human share the space, they must negotiate, at each moment, what each will do next so that they do not collide.

Why this is important: From the point of view of the robot's programming, the robot has to maximise the efficiency of its actions in conjunction with whatever the human chooses to do; it has to take human intentions into account. In other words, the machine is programmed to cooperate, working with the human to produce better outcomes.
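The coordination problem above can be sketched in miniature. The toy example below is entirely illustrative (the MIT system is language-driven and far richer): a robot picks a workstation by weighing task value against the predicted chance of colliding with its human teammate, with all station names and reward values invented for the sketch.

```python
# Hypothetical setup: a robot and a human share three kitchen workstations.
# The robot earns reward for working at a station, but a collision (both
# agents at the same station) is penalised. All numbers are illustrative.
STATIONS = ["counter", "sink", "stove"]
TASK_VALUE = {"counter": 3.0, "sink": 1.0, "stove": 2.0}
COLLISION_PENALTY = -5.0

def predict_human(history):
    """Estimate the probability of the human visiting each station,
    using a simple frequency count of past visits."""
    total = len(history)
    return {s: history.count(s) / total for s in STATIONS}

def choose_station(history):
    """Pick the station with the best expected reward, trading off
    task value against the predicted chance of a collision."""
    probs = predict_human(history)
    expected = {s: TASK_VALUE[s] + probs[s] * COLLISION_PENALTY
                for s in STATIONS}
    return max(expected, key=expected.get)

# The human has mostly been at the counter, so the robot yields it.
human_history = ["counter", "counter", "sink", "counter"]
print(choose_station(human_history))  # → stove
```

A real RL agent would learn both the human model and the reward trade-off from experience rather than having them hard-coded, but the core idea, maximising expected reward conditioned on a prediction of the partner's behaviour, is the same.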

 

AI: Our Final Invention

In brief:  With AI figuring more and more in our daily lives, academic study of where it may lead has increased. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, interviewed a variety of leading technology figures and asked for their insights into the future of AI. Perhaps unsurprisingly, there was no consensus to be found. What does come up repeatedly, however, is that as Artificial General Intelligence (AGI) develops, oversight of the technology will need to increase. Barrat compares AGI with nuclear fission – a technology with huge applications, but dangerous in the wrong hands. He argues that the transparent reporting of AI advances may have unwanted repercussions if the information is used by somebody with bad intentions, and that because conventional regulation is near-impossible, hard limitations may have to be imposed to ensure public safety.

Why this is important: As AGI develops, the question of how – and whether – it can be controlled will only become more pressing. If leading figures in the field cannot agree on where the technology is heading, the industry will have to plan for safety under genuine uncertainty.

How Chess Revolutionised AI

In brief:  In 1997, chess grandmaster Garry Kasparov was beaten by IBM’s Deep Blue supercomputer, an outcome many see as a pivotal moment in the evolution of AI. Chess, a game of pure logic with strict, well-defined rules, was the perfect vehicle for AI: every position and candidate move can be represented exactly, and a program can search among them for the best one. This article looks at the long, shared history of chess and computer science. The Kasparov match was in some ways a publicity stunt, but it highlighted how far AI and ML had come. Initially, AI triumphed merely through brute-force search – and was criticised for it – but modern systems can learn from their own mistakes. That shift was highlighted in 2017, when DeepMind’s algorithm AlphaZero beat the reigning computer chess champion, Stockfish, playing with something like human intuition and finesse.

Why this is important: Working in the field of data, we are often preoccupied with looking towards the future. However, there are lessons to be learned from the past as well. The relationship between chess, AI, and the desire to play a perfect game is continuing to push the boundaries of AI.


AI Advances Are Not All That They Seem

In brief:  It can sometimes seem as if every day brings a story about another AI triumph; scratch beneath the surface, however, and you often find little real innovation. Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT), goes one step further, stating that some reported gains may not even exist. Blalock cites research into 81 neural-network pruning algorithms – all claiming superiority – which found little overall improvement in the field over a ten-year period. This article looks at several examples where claimed NN gains and AI advances actually show very little overall improvement, and in some cases reveal that the major gains were made decades ago. Blalock acknowledges that legitimate advances do exist, with some approaches “working spectacularly well.”

Why this is important: Better comparisons and benchmarking are needed to get a true sense of where the field stands. Without meaningful comparisons, we cannot really understand what the latest breakthrough means for the industry.

AI vs. Cyberattacks

In brief:  Cyberattacks have become part of modern life, with companies investing billions of dollars to protect their data from hackers. As ways of securing systems have become more advanced and innovative, so too have the means of attack. Following the 2017 WannaCry and NotPetya worm attacks, which bypassed firewalls and crippled thousands of organisations, companies have turned to AI to provide greater protection. Machine-learning algorithms have the advantage of speed and are less labour-intensive than traditional security models. However, hackers are now also leveraging ML to develop more sophisticated modes of attack, deploying malicious algorithms that evade detection by adapting, learning, and continuously improving. AI-led attacks look likely to feature in the future, with a recent Forrester study finding that 88% of security professionals expect AI-driven attacks to become mainstream.

Why this is important: With new threats on the horizon, the industry will have to adapt in order to defend itself. Companies are already shifting towards more ‘offensive AI’ systems and we are likely to see even more developments as people fight to protect their most valuable resource - data.
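The article does not describe any specific defensive algorithm, but the kind of ML-based detection it alludes to can be illustrated with a minimal, assumed example: a statistical anomaly detector that learns a baseline of normal traffic and flags large deviations. Everything here – the requests-per-minute framing, the numbers, the 3-sigma threshold – is an invented sketch, not a description of any real product.

```python
from statistics import mean, stdev

def fit(baseline):
    """Learn 'normal' behaviour from a baseline of
    requests-per-minute counts."""
    return mean(baseline), stdev(baseline)

def is_anomalous(rate, mu, sigma, threshold=3.0):
    """Flag traffic whose rate deviates more than `threshold`
    standard deviations from the learned baseline."""
    return abs(rate - mu) > threshold * sigma

baseline = [100, 104, 98, 101, 97, 103, 99, 102]
mu, sigma = fit(baseline)
print(is_anomalous(500, mu, sigma))  # True: likely attack traffic
print(is_anomalous(101, mu, sigma))  # False: within normal range
```

Real systems model many features at once and retrain continuously – which is exactly the property the article notes attackers are now turning against defenders, by crafting traffic that stays inside the learned baseline.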

AI and the Far-Right

In brief:  There have been worrying news reports over the past month of links between prominent AI business leaders in the US and far-right organisations such as the Ku Klux Klan. The number of cases is alarming, particularly given the access these individuals and their companies have to data and technologies used by law enforcement and by city and state governments. This article looks at how such views could have disturbing implications for the technology these companies produce. We have looked at bias within AI systems in previous newsletters, and this Medium blog post puts forward the idea that facial recognition software developed by these companies and used by US authorities may have racial profiling built in, with bias against certain races and genders baked into the algorithms and endemic in the industry.

Why this is important: The applications of AI are widespread and questions about their inclusivity have been raised before. If there is an undercurrent of extremism within the community, it is possible that it has infiltrated the technology itself.

How AI Can Be Used to Stimulate the Economy

In brief:  For an economy to work as well as possible, economists must strive for income equality. This is one of the most difficult balances to strike, and settling on an acceptable, workable taxation rate is fraught with difficulty, often relying on unverifiable assumptions. Economists are now turning to AI and RL to help develop tax policies that could serve as an economic stimulus. The technology, developed by Salesforce, is still in its infancy but already shows promise in producing policies driven by data rather than political beliefs. Neural networks have previously been used to control agents in simulated economies; what makes this tool different is that the policymaker is also an AI. The result is a model in which workers and policymakers continually adapt and react to each other’s actions.

Why this is important: By working towards a system that is more intelligent and less reliant on guesswork, we can hope to achieve a fairer and more effectively stimulated economy.
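As a rough illustration of the two-level setup described above, the sketch below has a policymaker agent searching for a tax rate while workers adapt their effort to it. All dynamics and numbers are invented for the sketch; Salesforce's system trains full RL agents over far richer simulated economies, and the hill-climbing search here merely stands in for the policymaker's learning.

```python
import random

random.seed(0)  # reproducible illustration

def worker_effort(tax_rate):
    """Workers supply less labour as the tax rate rises (illustrative)."""
    return max(0.0, 1.0 - tax_rate)

def social_welfare(tax_rate):
    """Policymaker objective: balance tax revenue against lost output."""
    effort = worker_effort(tax_rate)
    revenue = tax_rate * effort   # redistribution funded by taxes
    output = effort               # production kept by workers
    return revenue + 0.5 * output

def policymaker_search(steps=200, step_size=0.05):
    """Hill-climbing stand-in for the RL policymaker: nudge the tax
    rate at random and keep any change that improves welfare."""
    rate = 0.5
    for _ in range(steps):
        candidate = min(1.0, max(0.0, rate + random.uniform(-step_size,
                                                            step_size)))
        if social_welfare(candidate) > social_welfare(rate):
            rate = candidate
    return rate

print(round(policymaker_search(), 2))  # converges near 0.25
```

In this toy objective, welfare is (1 - t)(t + 0.5), which peaks at a tax rate of 0.25 – so the search settles near that value. The interesting property the article describes is that in the real system the workers are learning agents too, so the "correct" policy is a moving target rather than a fixed optimum like this one.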

Companies Faking AI

In brief:  Sometimes the technical and data-based complexities of AI make it hard for technology companies to deliver on their promises. Some companies respond not by scaling back their AI ambitions, but by using humans to do the work their AI systems were supposed to do. This concept – humans pretending to be machines that were meant to replace humans – is called “pseudo-AI.” Most notably, a number of companies have claimed to use AI to automate parts of their services, such as transcription, appointment scheduling, and other personal-assistant work, while in reality outsourcing the work to humans through labour marketplaces. Whether AI actually handles part of the task or none of it at all, these companies have not been truthful in claiming that computers perform these services.

Why this is important: A number of pseudo-AI cases, such as Hanson Robotics and the scheduling services X.AI and Clara Labs, have made the news, but there are no doubt many others using humans as a “stop-gap” where their AI systems fall short.
