Artificial Intelligence (AI) is the key technology in many of today’s novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you’re having problems and offer appropriate advice. Intellectually, AI depends on broad interaction with computing disciplines and with fields outside computer science, including logic, psychology, linguistics, philosophy, neuroscience, mechanical engineering, statistics, economics, and control theory. AI problems are extremely difficult, far more difficult than was imagined when the field was founded. Early work in AI focused on using cognitive and biological models to simulate and explain human information processing skills, on “logical” systems that perform common-sense and expert reasoning, and on robots that perceive and interact with their environment. Today developers can build systems that meet the advanced information needs of government and industry by choosing from a broad palette of mature technologies. Sophisticated methods for reasoning about uncertainty and for coping with incomplete knowledge have led to more robust diagnostic and planning systems.
AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society. Even as AI technology becomes integrated into the fabric of everyday life, AI researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise. The pursuit of the ultimate goals of AI (the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence, possibly superhuman) continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life. But the ultimate promises of AI are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort.
Introduction to Artificial Intelligence:
In order to classify machines as “thinking”, it is necessary to define intelligence. In a broad sense, we can define intelligence as the ability to learn and to perform adequate actions in a given situation. In other words, “Intelligence is the adequate response to a stimulus.” One of the most challenging tasks facing experts is building systems that mimic the behavior of the human brain, which is made up of billions of neurons and is arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing’s test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.
AI really began to intrigue researchers with the invention of the computer in 1943. The technology was finally available, or so it seemed, to simulate intelligent behavior. Over the next four decades, despite many stumbling blocks, AI grew from a dozen researchers to thousands of engineers and specialists, and from programs capable of playing checkers to systems designed to diagnose disease.
History of AI:
The term “artificial intelligence” was first coined in 1956 at the Dartmouth conference, and since then the field has expanded because of the theories and principles developed by its dedicated researchers.
Aristotle (384-322 BC) developed an informal system of syllogistic logic, which is the basis of the first formal deductive reasoning system.
Early in the 17th century, Descartes proposed that the bodies of animals are nothing more than complex machines.
Pascal in 1642 made the first mechanical digital calculating machine.
In the 19th century, George Boole developed a binary algebra representing “Laws of Thought”.
Charles Babbage and Ada Byron worked on programmable mechanical calculating machines.
In the late 19th century and early 20th century, mathematical philosophers like Gottlob Frege, Bertrand Russell, Alfred North Whitehead, and Kurt Gödel built on Boole’s initial logic concepts to develop mathematical representations of logic problems.
The advent of electronic computers provided a revolutionary advance in the ability to study intelligence.
In 1943, McCulloch and Pitts developed a Boolean circuit model of the brain. They wrote the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”, which explained how it is possible for neural networks to compute.
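The McCulloch-Pitts unit can be sketched as a simple binary threshold neuron. The weights and thresholds below are illustrative choices showing how such units realize Boolean functions; they are not taken from the original paper.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both binary inputs must be active to reach threshold 2.
assert mcp_neuron([1, 1], [1, 1], 2) == 1
assert mcp_neuron([1, 0], [1, 1], 2) == 0

# OR gate: any single active input reaches threshold 1.
assert mcp_neuron([0, 1], [1, 1], 1) == 1
assert mcp_neuron([0, 0], [1, 1], 1) == 0
```

Networks of such units, McCulloch and Pitts showed, can compute any Boolean function.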
Marvin Minsky and Dean Edmonds built the SNARC in 1951, which is the first randomly wired neural network learning machine (SNARC stands for Stochastic Neural-Analog Reinforcement Computer).
It was a neural network computer that used 3000 vacuum tubes and a network with 40 neurons.
In 1950, Turing published the article “Computing Machinery and Intelligence”, which articulated a complete vision of AI. Turing, who as early as 1934 had theorized that machines could imitate thought, proposed in that essay a test for AI machines. The Turing Test calls for a panel of judges to review typed answers to questions that have been addressed to both a computer and a human. If the judges can make no distinction between the two sets of answers, the machine may be considered intelligent.
Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch most likely to lead to the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.
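The strategy described above, repeatedly expanding the branch judged most promising, is essentially heuristic best-first search. The goal tree, node names, and scores below are invented for illustration and are not the Logic Theorist's actual representation.

```python
import heapq

# Hypothetical proof-search tree: each node maps to (heuristic score, children).
# Lower score = judged more promising. All names and scores are invented.
TREE = {
    "axioms": (3, ["lemma_a", "lemma_b"]),
    "lemma_a": (2, ["theorem"]),
    "lemma_b": (5, []),
    "theorem": (0, []),
}

def best_first_search(tree, start, is_goal):
    """Expand the most promising open branch first until a goal is found."""
    frontier = [(tree[start][0], start)]  # min-heap ordered by score
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if is_goal(node):
            return node
        for child in tree[node][1]:
            heapq.heappush(frontier, (tree[child][0], child))
    return None

# Reaches "theorem" by expanding the cheaper branch (lemma_a) first.
best_first_search(TREE, "axioms", lambda n: n == "theorem")
```

The heuristic ordering is what distinguishes this from blind exhaustive search: unpromising branches like `lemma_b` may never be expanded at all.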
In 1941, an invention revolutionized every aspect of the storage and processing of information. That invention, developed in both the US and Germany, was the electronic computer. The first computers required large, separate, air-conditioned rooms and were a programmer’s nightmare, involving the separate configuration of thousands of wires just to get a program running. The 1949 innovation, the stored-program computer, made the job of entering a program easier, and advances in computer theory led to computer science, and eventually to artificial intelligence.
With the invention of an electronic means of processing data came a medium that made AI possible.
In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to New Hampshire for “The Dartmouth Summer Research Project on Artificial Intelligence.” From that point on, because of McCarthy, the field would be known as Artificial Intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.
The years from 1969 to 1979 marked the early development of knowledge-based systems.
In 1974, MYCIN demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy. Knowledge representation schemes were developed, including frames, introduced by Minsky. Logic-based languages like Prolog and Planner were also developed.
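The rule-based style of systems like MYCIN can be sketched as forward chaining over if-then rules. The rules and symptoms below are invented for illustration; they are not MYCIN's actual medical knowledge base, which also attached certainty factors to each rule.

```python
# Each rule: (set of conditions, conclusion). All content here is invented.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are all known facts, until nothing changes."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Chaining: fever + cough yields respiratory_infection, which together
# with chest_pain then yields suspect_pneumonia.
derived = forward_chain({"fever", "cough", "chest_pain"}, RULES)
```

The key idea is that intermediate conclusions become facts that can trigger further rules, so expertise is captured as many small, independently stated rules rather than one monolithic procedure.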
We will now mention a few systems that were developed over the years.
The Meta-Dendral learning program produced new results in chemistry (rules of mass spectrometry).
In the 1980s, LISP machines were developed and marketed.
Around 1985, neural networks returned to popularity.
In 1988, there was a resurgence of probabilistic and decision theoretic methods.
Early AI systems used general-purpose methods and little knowledge. AI researchers realized that specialized knowledge is required for rich tasks, in order to focus reasoning.
The 1990s saw major advances in all areas of AI, including the following:
• Machine learning, data mining
• Intelligent tutoring,
• Case-based reasoning,
• Multi-agent planning, scheduling,
• Uncertain reasoning,
• Natural language understanding and translation,
• Vision, virtual reality, games, and other topics.
Rod Brooks’ COG Project at MIT, with numerous collaborators, made significant progress in building a humanoid robot.
The first official RoboCup soccer match, featuring table-top matches with 40 teams of interacting robots, was held in 1997.
In the late 1990s, Web crawlers and other AI-based information-extraction programs became essential to the widespread use of the World Wide Web.
Interactive robot pets (“smart toys”) became commercially available, realizing the vision of 18th-century novelty toy makers. In 2000, the Nomad robot explored remote regions of Antarctica looking for meteorite samples.
The Use of AI:
One of the questions that arises when talking about artificial intelligence is where we are going to use it. After all, there must be a reason why scientists are trying to create artificial intelligence.
Banks use artificial intelligence systems in insurance, in stock trading, and in property management. In August 2001, robots beat humans in an improvised trading competition. Pattern-recognition methods (including more complex, specialized techniques as well as neural networks) are widely used in optical and acoustic recognition (including text and speech), medical diagnostics, spam filters, target-identification systems, and many other national-security systems.
Computer game developers draw on AI achievements at every level of research. Standard AI tasks in games include pathfinding in 2D and 3D spaces, simulating the behavior of combat units, and designing economic strategies.
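Pathfinding, the first of these tasks, is usually solved with graph search. The sketch below uses breadth-first search on a toy 2D grid; the grid layout is invented, and real games typically prefer A* with a distance heuristic for speed.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Return the number of steps in the shortest 4-connected path, or -1 if none.

    grid: list of rows, 0 = open cell, 1 = wall; start/goal are (row, col)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

# A unit at the top-left must route around the wall to reach the bottom-left.
GRID = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
shortest_path_length(GRID, (0, 0), (2, 0))  # 6 steps around the wall
```

Breadth-first search guarantees the shortest path on an unweighted grid; A* keeps that guarantee while expanding far fewer cells on large game maps.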
Artificial intelligence is closely connected with transhumanism. Together with neurophysiology and cognitive psychology, it forms the broader science called cognitive science. Philosophy also plays an important role in the formation of artificial intelligence.
Applications of AI:
The applications of AI are abundant and widespread, especially in developed countries. In fact, Artificial Intelligence has become such a mainstay in today’s world that it is taken for granted by the majority of people who benefit from its efficiency. Air conditioners, cameras, video games, medical equipment, traffic lights, and refrigerators: all function by way of developments in “smart” technology or fuzzy logic. Large financial and insurance institutions rely heavily on Artificial Intelligence to process the huge quantities of information that are fundamental to their business practices.
The application of computer speech recognition, though more limited in use and practical convenience, has made it possible to interact with computers by speaking instead of writing.
Computer vision instructs computers on how to comprehend images and scenes. Among its goals are image recognition, image tracking, and image mapping. This application is valued in the fields of medicine, security, surveillance, military operations, and even movie-making. AI technology, while new, is so pervasive that it has already become a critically important component in many other existing technologies. The military and computer science have always been closely tied; in fact, the early development of computing was virtually exclusively limited to military purposes. The very first operational use of a computer was the gun director used in the Second World War to help ground gunners predict the path of a plane given its radar data. Famous names in AI, such as Alan Turing, were scientists heavily involved in the military. Turing, recognized as one of the founders of both contemporary computer science and artificial intelligence, was the scientist who broke the German Enigma code through the use of computers.
Genetic Engineering, Neural Networks and Pattern Recognition are other good examples of applications of AI.
AI can be used in music in many different ways: both to compose (create music) and to transcribe (create written music from listening to pieces).
Getting computers to compose well is an incredibly hard task. Computers that compose often require human input to determine whether the music sounds good or not.
One program that does not quite require this is Variations, developed by Bruce L. Jacob. The program uses genetic algorithms to compose a piece and then evaluate it to decide whether or not it is good.
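A genetic-algorithm composer of this general kind can be sketched as follows. The target phrase and the simple match-counting fitness function are invented stand-ins: Variations' actual evaluation of musical quality is far richer, and often such systems substitute human judgment for the fitness function.

```python
import random

TARGET = [60, 62, 64, 65, 67, 65, 64, 62]  # invented example phrase (MIDI pitches)
PITCHES = list(range(55, 72))               # candidate note range

def fitness(melody):
    """Toy fitness: how many notes match the target phrase position-by-position."""
    return sum(1 for a, b in zip(melody, TARGET) if a == b)

def crossover(a, b):
    """Splice two parent melodies at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    """Randomly replace some notes to keep the population diverse."""
    return [random.choice(PITCHES) if random.random() < rate else n for n in melody]

def evolve(generations=200, pop_size=30):
    """Keep the fitter half each generation; breed the rest from it."""
    pop = [[random.choice(PITCHES) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]  # elitism: survivors carry over unchanged
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

Because the top half survives unchanged each generation, the best fitness never decreases; selection plus crossover and mutation then drives the population toward ever better melodies.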
AI techniques are found today in thousands of useful applications: smart Internet search engines that read the pages before recommending them to you; e-commerce collaborative filters that know your tastes in books and movies better than you know them yourself; diagnostic tools that analyze MRI images to find tumors and monitor treatments; biomedical informatics systems that understand the human genome and search for new drug therapies; neural networks that control our phone systems and keep service going during disasters; simulation games that generate whole worlds of characters interacting with your guidance; set-top boxes that monitor your TV watching and anticipate your entertainment preferences; data mining algorithms that search for terrorist bank accounts; and biometric algorithms that recognize your face, voice, and fingerprints.
Artificial Intelligence (AI) is a perfect example of how sometimes science moves more slowly than we would have predicted. In the first flush of enthusiasm at the invention of computers it was believed that we now finally had the tools with which to crack the problem of the mind, and within years we would see a new race of intelligent machines. We are older and wiser now. The first rush of enthusiasm is gone, the computers that impressed us so much back then do not impress us now, and we are soberly settling down to understand how hard the problems of AI really are. In some sense AI is engineering inspired by biology. We look at animals, we look at humans and we want to be able to build machines that do what they do. We want machines to be able to learn in the way that they learn, to speak, to reason and eventually to have consciousness.
Looking back at the history of AI, we can see that perhaps it began at the wrong end of the spectrum. If AI had been tackled logically, it would perhaps have begun as an artificial biology, looking at living things and saying “Can we model these with machines?” The working hypothesis would have been that living things are physical systems so let’s try and see where the modeling takes us and where it breaks down. Artificial biology would look at the evolution of physical systems in general, development from infant to adult, self-organization, complexity and so on. Then, as a subfield of that, a sort of artificial zoology that looks at sensorimotor behavior, vision and navigation, recognizing, avoiding and manipulating objects, basic, pre-linguistic learning and planning, and the simplest forms of internal representations of external objects.
The argument I am developing is that there may be limits to AI, not because the hypothesis of ‘strong AI’ is false, but for more mundane reasons. What will happen over the next thirty years is that we will see new types of animal-inspired machines that are more ‘messy’ and unpredictable than any we have seen before. These machines will change over time as a result of their interactions with us and with the world. These silent, pre-linguistic, animal-like machines will be nothing like humans, but they will gradually come to seem like a strange sort of animal. Machines that learn, familiar to researchers in labs for many years, will finally become mainstream and enter the public consciousness.
A special focus will be behavior, which is easier to learn than to articulate – most of us know how to walk but we couldn’t possibly tell anyone how we do it. Similarly with grasping objects and other such skills. These things involve building neural networks, filling in state-spaces and so on, and cannot be captured as a set of rules that we speak in language. You must experience the dynamics of your own body in infancy and thrash about until the changing internal numbers and weights start to converge on the correct behavior. Different bodies mean different dynamics. And robots that can learn to walk can learn other sensorimotor skills that we can neither articulate nor perform ourselves.
Whether these types of machines may have a future in the home is an interesting question. If it ever happens, I think it will be because the robot is treated as a kind of pet, so that a machine roaming the house is regarded as cute rather than creepy. Machines that learn tend to develop an individual, unrepeatable character which humans can find quite attractive. There are already a few games in software, such as the Windows-based game Creatures and the little Tamagotchi toys, whose personalities people can get very attached to. A major part of the appeal is the unique, fragile and unrepeatable nature of the software beings you interact with. If your Creature dies, you may never be able to raise another one like it again. Machines in the future will be similar, and the family robot will after a few years be, like a pet, literally irreplaceable.
Now the basic question is yet to be answered:
1. Did Artificial Intelligence fulfill the promises it made?
2. Can Artificial Intelligence solve real-world problems?
3. If the answer to the above questions is YES, then how has it affected human beings?
4. And if the answer is NO, then what are the future predictions?
Answering the above questions is a somewhat difficult thing to do.
Anyway, coming to the first question, “did AI fulfill its promises?”, I think AI has not been able to achieve what it intended to achieve by now.
Researchers initially made many promises that there would be machines thinking at the level of humans and learning from their mistakes. No such progress has been made, and it will actually take much more time to build a machine that can learn from its past experience. We can say, however, that enough progress has been made to build machines that perform certain tasks but cannot completely imitate humans. So the bottom line is that the original promises looked far ahead, and it may take a very long time to actually fulfill them.
Now coming to the next question, “can AI solve real problems?”:
Expert systems are the most advanced part of AI, and they are in wide commercial use. Uses of expert systems include medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, document routing in financial institutions, oil and mineral prospecting, genetic engineering, camera lens design, computer installation design, airline scheduling, cargo placement, and the provision of an automatic customer help service for home computer owners.
From the above-mentioned uses of AI in various fields, we can conclude that AI has made considerable progress since its inception, and efforts are still being made to improve current standards. AI can solve real problems, but only to some extent, as it cannot fully imitate humans or learn from its past experience.
Let’s make a few predictions that we’ll later look back and laugh at.
First, family robots may be permanently connected to wireless family intranets, sharing information with those who you want to know where you are. You may never need to worry if your loved ones are alright when they are late or far away, because you will be permanently connected to them. Crime may get difficult if all family homes are full of half-aware, loyal family machines. In the future, we may never be entirely alone, and if the controls are in the hands of our loved ones rather than the state, that may not be such a bad thing. Slightly further ahead, if some of the intelligence of the horse can be put back into the automobile, thousands of lives could be saved, as cars become nervous of their drunk owners, and refuse to get into positions where they would crash at high speed. We may look back in amazement at the carnage tolerated in this age, when every western country had road deaths equivalent to a long, slow-burning war. In the future, drunks will be able to use cars, which will take them home like loyal horses. And not just drunks, but children, the old and infirm, the blind, all will be empowered.
Eventually, if cars were all (wireless) networked, and humans stopped driving altogether, we might scrap the vast amount of clutter all over our road system – signposts, markings, traffic lights, roundabouts, central reservations – and return our roads to a soft, sparse, eighteenth-century look. All the information – negotiation with other cars, traffic and route updates – would come over the network invisibly. And our towns and countryside would look so much sparser and more peaceful.
In this paper I have tried to illustrate the progress made by AI and by its researchers to date. I have tried to give an idea of how artificial animals could be useful, but the reason I am interested in them is the hope that artificial animals will provide the route to artificial humans. The latter, however, is not going to happen in our lifetimes (and indeed may never happen, at least not in any straightforward way).
In the coming decades, we shouldn’t expect that the human race will become extinct and be replaced by robots. We can expect that classical AI will go on producing more and more sophisticated applications in restricted domains (expert systems, chess programs, intelligent agents), but any time we expect common sense we will continue to be disappointed, as we have seen in the past. At vulnerable points these systems will continue to be exposed as ‘blind automata’. Animal-based AI, by contrast, will go on producing stranger and stranger machines, less rationally intelligent but more rounded and whole, in which we will start to feel that there is somebody at home, in a strange animal kind of way. In conclusion, we won’t see full AI in our lives, but we should live to get a good feel for whether or not it is possible, and how it could be achieved by our descendants.
As artificial intelligence moves out of the research labs and into the commercial world, the next generation of digital systems will be smarter, more independent and more powerful than the hand-crafted computer programs that came before. Companies that have clever robots will out-compete those with merely good programmers. Managers who understand the power of intelligent systems will replace those who just know data processing. And people in all walks of life who understand how digital intelligence works will apply AI to new ways of working, playing and living that expand human potential and define a new relationship between people and machines.