Why is Artificial Intelligence important?
The AI learning adventure explores intelligence and its connection to engineering and technology. Using ideas about human intelligence, and intelligence more broadly, engineers can create "artificial intelligence": that is, impart "human" intelligence into machines or technology (classical AI), or design technology that can itself "create" intelligence (future AI). In fact, understanding how the brain works ("reverse-engineering the brain") and understanding how engineers design intelligent machines that replicate human intelligence is one of the "Grand Challenges of Engineering" set forth by the National Academy of Engineering (NAE).
The implications and benefits of understanding the brain are many. In addition to advances in the treatment of brain injuries and diseases, and advancements in communications technology and computer simulations, understanding the brain will allow the design of intelligent machines with even more significant societal impacts. Already, machines that compute, perform voice or facial recognition, respond to human prompts, and sense and monitor human activity are routine in today's society. The future capabilities of these machines, the limits of their "intelligence" and their ability to replicate human thinking, depend on the engineer's insight into human intelligence and the workings of the human brain.
One example of artificial intelligence is computers that try to appear more human in what they can do. (You may have seen "chatterbots," computers that act like humans, on shopping websites like Ikea, for example.) Other important themes in the learning adventure include how artificial intelligence fits into the broader scope of technology's human roots and its place in human society. For example, "intelligent" computers are the result of human invention and are used to further human goals.
The key applications of artificial intelligence are in any area that involves more data than humans can handle on their own, but decisions simple enough that an AI can make headway: big data, and lots of little rote operations that add up to something useful. An example is image recognition. By doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take a picture of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
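To make that concrete, here is a minimal sketch of the kind of repetitive, low-level feature work such a service performs, using OpenCV's ORB keypoint detector and a brute-force matcher. The filenames and the thresholds are placeholders for illustration, not anything Google actually uses.

```python
# Minimal sketch of landmark recognition by local feature matching.
# The image paths are placeholders; any two photos of the same landmark will do.
import cv2

query = cv2.imread("photo_of_landmark.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("known_landmark.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local keypoints in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp1, desc1 = orb.detectAndCompute(query, None)
kp2, desc2 = orb.detectAndCompute(reference, None)

# Brute-force match descriptors; many close matches suggest the same landmark.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc1, desc2)
good = [m for m in matches if m.distance < 40]  # distance cutoff is a tunable guess

print(f"{len(good)} strong matches -> "
      f"{'likely the same landmark' if len(good) > 25 else 'probably not a match'}")
```

A real service repeats this kind of calculation against millions of reference images, which is exactly the "lots of little rote operations" that computers handle better than we do.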
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
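Seung's actual pipeline relies on convolutional networks trained on hand-labeled electron micrographs, but a classical segmentation pass gives a feel for the task: turning a raw micrograph into labeled cell regions. The filename below is a placeholder, and this is only a toy stand-in for the real method.

```python
# Toy illustration of segmenting cell bodies in a micrograph-like image.
# Real connectome pipelines use trained convolutional networks; this classical
# watershed pass just shows the shape of the problem: pixels in, labeled regions out.
from skimage import io, filters, measure, segmentation

image = io.imread("electron_micrograph_slice.png", as_gray=True)  # placeholder path

# Boundaries between cells tend to be dark membranes; use the intensity
# gradient as a "terrain" to flood.
elevation = filters.sobel(image)

# Seed the watershed from bright cell interiors.
markers = measure.label(image > filters.threshold_otsu(image))

# Flood the elevation map from the seeds; each basin becomes one candidate cell.
labels = segmentation.watershed(elevation, markers)
print(f"found {labels.max()} candidate regions in a {image.shape} image")
```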
As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map this worm's nervous system in the 1970s and 80s, it took more than a decade of work to piece the data into a full map. That is despite the nervous system containing only about 7,000 connections between roughly 300 neurons. By comparison, the human brain contains on the order of 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it will be hopeless.
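A quick back-of-envelope calculation with those figures shows just how hopeless:

```python
# Back-of-envelope scale comparison using the figures above.
worm_connections = 7_000
human_connections = 100 * 10**12          # 100 trillion

ratio = human_connections / worm_connections
print(f"human / worm connectome size: {ratio:.1e}")   # roughly 1.4e10

# The worm map took over a decade of manual work. Scaled naively at the same
# rate, a purely manual human connectome would take on the order of ten
# billion decades.
print(f"manual effort, naively scaled: {ratio:.1e} decades")
```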
There's another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer's patients. The feasibility of a prosthesis for the hippocampus, the part of the brain responsible for forming memories, was recently demonstrated by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
These implants are built by carefully recording the brain's neural signals and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subfield of AI) and abundant computing power, this approach would never work.
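Berger's processor is of course far more elaborate, but the core idea, fitting a network to recorded input/output activity so it can later substitute for the damaged tissue, can be sketched in a few lines. The random tensors below are stand-ins for real electrode recordings, and the architecture is purely illustrative.

```python
# Toy sketch: fit a network that maps recorded input spike patterns to the
# output patterns the healthy tissue would have produced.
import torch
import torch.nn as nn

inputs = torch.rand(500, 16)    # 500 recorded trials, 16 input electrodes (stand-in data)
targets = torch.rand(500, 16)   # corresponding output activity (stand-in data)

model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
    nn.Sigmoid(),               # outputs read as firing probabilities
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the input -> output mapping observed in the recordings.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final reconstruction error: {loss.item():.4f}")
```

The hard part in practice is not the network but the painstaking recording needed to get representative input/output data in the first place.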
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.
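To get a feel for the per-frame work involved, here is a sketch that runs an off-the-shelf pretrained object detector (torchvision's Faster R-CNN) over a single camera frame. This is not Google's pipeline, and the image path is a placeholder; a real AR system would need to do something like this many times per second, on-device.

```python
# Run a pretrained object detector on one camera frame: the kind of per-frame
# analysis an AR overlay needs before it can draw anything useful.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = read_image("camera_frame.jpg")          # placeholder path
batch = [weights.transforms()(frame)]

with torch.no_grad():
    detections = model(batch)[0]

# Report the confident detections by class name.
categories = weights.meta["categories"]
for label, score in zip(detections["labels"].tolist(), detections["scores"].tolist()):
    if score > 0.7:
        print(f"{categories[label]}: {score:.2f}")
```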
Finally, let's consider some of the recent advances in building AI scientists. In 2009, "Adam" became the first robot to discover new scientific knowledge, in this case about the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypotheses and tested them. Though the context and the experiments were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
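Adam's real system reasons over logical models of yeast metabolism, but the closed loop it embodies, propose hypotheses, run the most informative experiment, and update beliefs, can be caricatured in a few lines. The simulated knockout experiment below is purely a stand-in for actual lab automation.

```python
# Stripped-down hypothesize-and-test loop in the spirit of a robot scientist.
# Hypotheses are simply "gene g is required for growth"; run_experiment() is a
# stand-in for robotic lab equipment.
import random

genes = [f"gene_{i}" for i in range(10)]
secret_essential = set(random.sample(genes, 3))     # ground truth the "robot" must discover

def run_experiment(gene):
    """Simulated knockout experiment: does the strain still grow without this gene?"""
    return gene not in secret_essential             # True = grows, so gene is not essential

beliefs = {g: 0.5 for g in genes}                   # prior probability each gene is essential

while any(0.05 < p < 0.95 for p in beliefs.values()):
    # Test the most uncertain hypothesis next (belief closest to 0.5).
    gene = max(beliefs, key=lambda g: min(beliefs[g], 1 - beliefs[g]))
    grows = run_experiment(gene)
    beliefs[gene] = 0.0 if grows else 1.0           # a noiseless experiment settles the question

essential = sorted(g for g, p in beliefs.items() if p > 0.5)
print("inferred essential genes:", essential)
```

The point of the toy loop is the division of labor: the computer decides what to test next and interprets the result, while the robotics merely carries the experiment out.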
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting and the most relevant to futurist goals.