Tuesday, March 11, 2008
Andres Agostini is a Research Analyst & Consultant & Management Practitioner & Original Thinker & E-Author & Institutional Coach. His topics of in-depth study & practice are Science, Technology, Corporate Strategy, Business, Management, “Transformative Risk Management,” Professional Futurology, & Mind-Expansion Techniques Development. He hereby shares his thoughts, ideas, reflections, findings, & suggestions with total independence of thinking and without mental reservation.
To disseminate new ideas, hypotheses, theses, original thinking, and new proposals to reinvent theory pertaining to Strategy, Innovation, Performance, and Risk (all kinds), via Scientific and Highly-Sophisticated Management, in accordance with the perspective of applied omniscience (the perspective of the totality of knowledge). Put simply, to research and analyze new ways to optimize best practices to an optimum degree.
WHO IS ANDY AGOSTINI?
“Put simply, an inspired, determined soul, with an audacious style of ingrained womb-to-tomb thinking from the monarchy of originality, who starvingly seeks and seeks and seeks —in real-time—the yet unimagined futures in diverse ways, contexts, and approaches, originated in the FUTURE. A knowledge-based, pervasively rebellious, ‘type A Prima Donna’, born out of extraterrestrial protoplasm, who is on a rampant mission to (cross) research science (state of the art from the avant-garde) progressively, envision, and capture a breakthrough foresight of what is/what might be/what should be, still to come while he marshals his ever-practicing, inquisitive future-driven scenarios, via his Lines of Practice and from the intertwined, intersected, chaotically frenzied stances that combine both subtlety and brute force with the until now overwhelmingly unthinkable.”
Andres Agostini
www.AndyBelieves.blogspot.com
11:10 p.m. (GMT / UTC)
Monday, March 24, 2008
We are living in extreme times. As a Global Risk Manager and Scenario Strategist, I know we have the science and technology to solve many existential risks. The problem is that the world is over-populated, as it seems, by a majority of psycho-stable people. To face and act upon the immeasurable challenges ahead, we will require a majority of extremely educated (exact sciences) people who are psycho-kinetically minded: people with an unlimited drive to do things optimally, visionaries who will go all the way to make peace universal and to keep the ecology at its best. One life-or-death risk is nuclear war; there are too many alleged statesmen willing to pull the switch to quench their mediocre egos. If we can manage the existential risks systemically, systematically, and holistically (including the ruthless progression of science and technology), the world (including some extra-Earth stations) can become a promising place. The powers and the superpowers must all “pull” in unison to mitigate or eliminate these extraordinarily grave risks.
Andres Agostini
www.AndyBelieves.blogspot.com/
9:32 p.m. GMT/UTC
March 14, 2008
NAPOLEON ON EDUCATION:
(Literally. Brackets are placed by Andres Agostini.
Content researched by Andres Agostini)
“….Education, strictly speaking, has several objectives: one needs to learn how to speak and write correctly, which is generally called grammar and belles lettres [fine literature of that time]. Each lyceum [high school] has provided for this object, and there is no well-educated man who has not learned his rhetoric.
After the need to speak and write correctly [accurately and unambiguously] comes the ability to count and measure [skillful at mathematics, physics, quantum mechanics, etc.]. The lyceums have provided this with classes in mathematics embracing arithmetical and mechanical knowledge [classic physics plus quantum mechanics] in their different branches.
The elements of several other fields come next: chronology [timing, tempo, in-flux epochs], geography [geopolitics plus geology plus atmospheric weather], and the rudiments of history are also a part of the education [sine qua non catalyzer to surf the Intensively-driven Knowledge Economy] of the lyceum. . . .
A young man [a starting, independent entrepreneur] who leaves the lyceum at sixteen years of age therefore knows not only the mechanics of his language and the classical authors [captain of the classic, great wars plus those into philosophy and theology], the divisions of discourse [the structure of documented oral presentations], the different figures of eloquence, the means of employing them either to calm or to arouse passions, in short, everything that one learns in a course on belles lettres.
He also would know the principal epochs of history, the basic geographical divisions, and how to compute and measure [dexterity with information technology, informatics, and telematics]. He has some general idea of the most striking natural phenomena [ambiguity, ambivalence, paradoxes, contradictions, paradigm shifts, predicaments, perpetual innovation, so forth] and the principles of equilibrium and movement both [corporate strategy and risk-managing of kinetic energy transformation pertaining to the physical world] with regard to solids and fluids.
Whether he desires to follow the career of the barrister, that of the sword [actual, scientific war waging in the frame of reference of work competition], OR ENGLISH [CENTURY-21 LINGUA FRANCA, MORE-THAN-VITAL TOOL TO ACCESS BASIC THROUGH COMPLEX SCIENCE], or letters; if he is destined to enter into the body of scholars [truest womb-to-tomb managers, pundits, experts, specialists, generalists], to be a geographer, engineer, or land surveyor—in all these cases he has received a general education [strongly dexterous in two to three established disciplines plus a background of a multitude of diverse disciplines from the exact sciences, social sciences, etc.] necessary to become equipped [talented] to receive the remainder of instruction [duly, on-going-ly indoctrinated to meet the thinkable and unthinkable challenges/responsibilities beyond his boldest imagination, indeed] that his [forever-changing, increasingly so] circumstances require, and it is at this moment [of extreme criticality for humankind survival], when he must make his choice of a profession, that the special studies [omnimode, applied with the real-time perspective of the totality of knowledge] of science present themselves.
If he wishes to devote himself to the military art, engineering, or artillery, he enters a special school of mathematics [quantum information sciences], the polytechnique. What he learns there is only the corollary of what he has learned in elementary mathematics, but the knowledge acquired in these studies must be developed and applied before he enters the different branches of abstract mathematics. No longer is it a question simply of education [and mind’s duly formation/shaping], as in the lyceum: NOW IT BECOMES A MATTER OF ACQUIRING A SCIENCE....”
END OF TRANSCRIPTION.
Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it.
Major AI textbooks define artificial intelligence as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] AI can be seen as a realization of an abstract intelligent agent (AIA) which exhibits the functional essence of intelligence.[3] John McCarthy, who coined the term in 1956,[4] defines it as "the science and engineering of making intelligent machines."[5]
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.[7]
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic.[8] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[9] Other names for the field have been proposed, such as computational intelligence,[10] synthetic intelligence,[10] intelligent systems,[11] or computational rationality.[12]
Humanity has imagined in great detail the implications of thinking machines or artificial beings. They appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea.[13] The earliest known humanoid robots (or automatons) were sacred statues worshipped in Egypt and Greece, believed to have been endowed with genuine consciousness by craftsmen.[14] In medieval times, alchemists such as Paracelsus claimed to have created artificial beings.[15] Realistic clockwork imitations of human beings have been built by people such as Yan Shi,[16] Hero of Alexandria,[17] Al-Jazari[18] and Wolfgang von Kempelen.[19] Pamela McCorduck observes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized."[20]
In modern fiction, beginning with Mary Shelley's classic Frankenstein, writers have explored the ethical issues presented by thinking machines.[21] If a machine can be created that has intelligence, can it also feel? If it can feel, does it have the same rights as a human being? This is a key issue in Frankenstein as well as in modern science fiction: for example, the film Artificial Intelligence: A.I. considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue is also being considered by futurists, such as California's Institute for the Future under the name "robot rights",[22] although many critics believe that the discussion is premature.[23][24]
Science fiction writers and futurists have also speculated on the technology's potential impact on humanity. In fiction, AI has appeared as a servant (R2D2), a comrade (Lt. Commander Data), an extension to human abilities (Ghost in the Shell), a conqueror (The Matrix), a dictator (With Folded Hands) and an exterminator (Terminator, Battlestar Galactica). Some realistic potential consequences of AI are decreased labor demand,[25] the enhancement of human ability or experience,[26] and a need for redefinition of human identity and basic values.[27]
Futurists estimate the capabilities of machines using Moore's Law, which measures the relentless exponential improvement in digital technology with uncanny accuracy. Ray Kurzweil has calculated that desktop computers will have the same processing power as human brains by the year 2029, and that by 2040 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity".[28]
"Artificial intelligence is the next stage in evolution," Edward Fredkin said in the 1980s,[29] expressing an idea first proposed by Samuel Butler's Darwin Among the Machines (1863), and expanded upon by George Dyson (science historian) in his book of the same name (1998). Several futurists and science fiction writers have predicted that human beings and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger, is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick and Ray Kurzweil.[28] Transhumanism has been illustrated in fiction as well, for example on the manga Ghost in the Shell.
In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, by the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.[30]
The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[31] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[32] computers were solving word problems in algebra, proving logical theorems and speaking English.[33] By the middle 60s their research was heavily funded by the U.S. Department of Defense[34] and they were optimistic about the future of the new field:
These predictions, and many like them, would not come true. They had failed to recognize the difficulty of some of the problems they faced.[37] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[38]
In the early 80s, AI research was revived by the commercial success of expert systems, programs that apply the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached more than a billion dollars.[39] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[40] Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[41]
In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[42] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[43]
The philosophy of artificial intelligence considers the question "Can machines think?" Alan Turing, in his classic 1950 paper, Computing Machinery and Intelligence, was the first to try to answer it. In the years since, several answers have been given:[44]
While there is no universally accepted definition of intelligence,[52] AI researchers have studied several traits that are considered essential.[6]
Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions.[53] By the late 80s and 90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[54]
For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.[55]
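To see how quickly such a combinatorial explosion overwhelms a naive search, here is an illustrative back-of-the-envelope calculation (a hypothetical sketch; the branching factors and depths are invented only to show the trend):

```python
# Illustrative only: how quickly a naive search space grows.
# With branching factor b and depth d, an exhaustive tree search
# visits on the order of b**d nodes.

def nodes_in_search_tree(branching_factor: int, depth: int) -> int:
    """Total nodes in a uniform tree: 1 + b + b^2 + ... + b^d."""
    return sum(branching_factor ** level for level in range(depth + 1))

if __name__ == "__main__":
    for depth in (5, 10, 20, 30):
        print(depth, nodes_in_search_tree(10, depth))
    # At branching factor 10, depth 30 already exceeds 10**30 nodes --
    # far beyond any realistic memory or time budget.
```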
It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model.[56] Embodied cognitive science argues that unconscious sensorimotor skills are essential to our problem solving abilities. It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.
Knowledge representation[57] and knowledge engineering[58] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[59] situations, events, states and time;[60] causes and effects;[61] knowledge about knowledge (what we know about what other people know);[62] and many other, less well researched domains. A complete representation of "what exists" is an ontology[63] (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
Intelligent agents must be able to set goals and achieve them.[67] They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it. They must also attempt to determine the utility or "value" of the choices available to them.[68]
In some planning problems, the agent can assume that it is the only thing acting on the world and can be certain of what the consequences of its actions will be.[69] However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as necessary, requiring the agent to reason under uncertainty.[70]
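To make the idea of weighing uncertain consequences concrete, here is a minimal sketch of choosing an action by expected utility (the action names, probabilities, and utilities are hypothetical, invented purely for illustration):

```python
# Minimal expected-utility sketch: each action leads to several possible
# outcomes with known probabilities and utilities; the agent picks the
# action whose probability-weighted utility is highest.

actions = {
    "go_left":  [(0.8, 10.0), (0.2, -5.0)],   # (probability, utility) pairs
    "go_right": [(0.5, 20.0), (0.5, -15.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))  # go_left 7.0
```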
Multi-agent planning tries to determine the best plan for a community of agents, using cooperation and competition to achieve a given goal. Emergent behavior such as this is used by both evolutionary algorithms and swarm intelligence.[71]
Important machine learning[72] problems are:
Natural language processing[74] gives machines the ability to read and understand the languages human beings speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[75]
The field of robotics[76] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[77] and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).[78]
Machine perception[79] is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision[80] is the ability to analyze visual input. A few selected subproblems are speech recognition,[81] facial recognition and object recognition.[82]
Emotion and social skills play two roles for an intelligent agent:[83]
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what it's talking about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[84]
There are as many approaches to AI as there are AI researchers—any coarse categorization is likely to be unfair to someone. Artificial intelligence communities have grown up around particular problems, institutions and researchers, as well as the theoretical insights that define the approaches described below. Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole.
In the 40s and 50s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton and the Ratio Club in England.[85]
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".[86]
During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[93] By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[94]
The "intelligent agent" paradigm became widely accepted during the 1990s.[99][100] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[101] the intelligent agent did not reach its modern form until Judea Pearl, Alan Newell and others brought concepts from decision theory and economics into the study of AI.[102] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents would be rational, thinking human beings.[100]
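A minimal sketch of this abstraction (the class, percepts, and reflex rule below are hypothetical, for illustration only) shows the bare perceive/act loop; any concrete approach, symbolic or sub-symbolic, could sit behind the decide method:

```python
# A bare-bones perceive/act loop for an abstract intelligent agent.

class ReflexAgent:
    def decide(self, percept):
        """Map a percept to an action; here, a trivial reflex rule."""
        return "clean" if percept == "dirty" else "move"

def run(agent, percepts):
    """Feed a stream of percepts to the agent and collect its actions."""
    return [agent.decide(p) for p in percepts]

print(run(ReflexAgent(), ["dirty", "clean", "dirty"]))
# ['clean', 'move', 'clean']
```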
The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and some can be based on new approaches (without forcing researchers to reject old approaches that have proven useful). The paradigm gives researchers a common language to describe problems and share their solutions with each other and with other fields—such as decision theory—that also use concepts of abstract agents.
An agent architecture or cognitive architecture allows researchers to build more versatile and intelligent systems out of interacting intelligent agents in a multi-agent system.[103] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[104] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.
In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.
Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[105] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[106] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal.[107] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[77] Even some learning algorithms have at their core a search engine.[108]
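As a small illustration of reducing a problem to search, the sketch below runs a breadth-first search over a hand-made state graph (the graph itself is invented for the example); finding a path from a start state to a goal state is exactly the "path from premises to conclusions" framing used above:

```python
from collections import deque

# Breadth-first search over an explicit state graph: each key maps a
# state to its successor states; the task is to find a path to the goal.

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def bfs_path(graph, start, goal):
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs_path(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```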
There are several types of search algorithms:
Logic[113] was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal. The most important technical development was J. Alan Robinson's discovery of the resolution and unification algorithm for logical deduction in 1963. This procedure is simple, complete and entirely algorithmic, and can easily be performed by digital computers.[114] However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski suggested representing logical expressions as Horn clauses (statements in the form of rules: "if p then q"), which reduced logical deduction to backward chaining or forward chaining. This greatly alleviated (but did not eliminate) the problem.[106][115]
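A toy forward-chaining sketch over propositional Horn clauses (the rules and facts are invented for illustration) shows the basic idea of repeatedly firing "if p then q" rules until no new facts can be derived:

```python
# Forward chaining over propositional Horn clauses: fire any rule whose
# premises are all known facts, add its conclusion, and repeat until a
# fixed point is reached.

rules = [
    ({"rain", "outside"}, "wet"),     # if rain and outside then wet
    ({"wet"}, "cold"),                # if wet then cold
    ({"cold", "tired"}, "go_home"),   # if cold and tired then go_home
]
facts = {"rain", "outside", "tired"}

def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain(rules, facts)))
# ['cold', 'go_home', 'outside', 'rain', 'tired', 'wet']
```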
Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning,[116] and inductive logic programming is a method for learning.[117]
There are several different forms of logic used in AI research.
Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. Starting in the late 80s and early 90s, Judea Pearl and others championed the use of methods drawn from probability theory and economics to devise a number of powerful tools to solve these problems.[121]
Bayesian networks[122] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[123] learning (using the expectation-maximization algorithm),[124] planning (using decision networks)[125] and perception (using dynamic Bayesian networks).[126]
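The simplest possible illustration of Bayesian reasoning is a two-variable network with one cause and one effect. The numbers below are made up for the example; the point is only to show a prior being updated into a posterior by Bayes' rule:

```python
# Tiny Bayesian inference example: one hidden cause (disease) and one
# observed effect (test result). All probabilities are illustrative.

p_disease = 0.01                     # prior P(disease)
p_pos_given_disease = 0.95           # P(test positive | disease)
p_pos_given_healthy = 0.05           # P(test positive | no disease)

# Total probability of a positive test.
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease_given_positive = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_positive, 3))  # 0.161
```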
Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time[127] (e.g., hidden Markov models[128] and Kalman filters[129]).
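The filtering task can be sketched with a tiny hidden Markov model (the states, observations, and probabilities below are invented for the example): the forward algorithm keeps a belief over the hidden state and updates it, predict-then-observe, with each new piece of evidence:

```python
# One forward (filtering) step at a time for a two-state hidden Markov
# model. States: "rain"/"dry"; observations: "umbrella"/"no_umbrella".

transition = {             # P(next state | current state)
    "rain": {"rain": 0.7, "dry": 0.3},
    "dry":  {"rain": 0.3, "dry": 0.7},
}
sensor = {                 # P(observation | state)
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
    "dry":  {"umbrella": 0.2, "no_umbrella": 0.8},
}

def forward(belief, observation):
    """Predict the next state distribution, then weight by the observation."""
    predicted = {
        s: sum(belief[prev] * transition[prev][s] for prev in belief)
        for s in belief
    }
    unnormalized = {s: sensor[s][observation] * predicted[s] for s in belief}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

belief = {"rain": 0.5, "dry": 0.5}
for obs in ["umbrella", "umbrella", "no_umbrella"]:
    belief = forward(belief, obs)
print({s: round(p, 3) for s, p in belief.items()})
```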
Planning problems have also taken advantage of other tools from economics, such as decision theory and decision analysis,[130] information value theory,[68] Markov decision processes,[131] dynamic decision networks,[131] game theory and mechanism design.[132]
The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems.
Classifiers[133] are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.
When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches.
A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is however still more an art than science.
The most widely used classifiers are the neural network,[134] kernel methods such as the support vector machine,[135] k-nearest neighbor algorithm,[136] Gaussian mixture model,[137] naive Bayes classifier,[138] and decision tree.[108] The performance of these classifiers has been compared over a wide range of classification tasks[139] in order to find data characteristics that determine classifier performance.
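A hand-rolled k-nearest-neighbour classifier (the training points and labels are toy data invented for the example) makes the "closest match to previous observations" idea concrete:

```python
import math
from collections import Counter

# Toy k-nearest-neighbour classifier: label a new observation by majority
# vote among the k closest labelled observations.

training = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((4.0, 4.2), "B"), ((4.1, 3.9), "B"), ((3.8, 4.0), "B"),
]

def classify(point, k=3):
    # Sort labelled examples by Euclidean distance to the query point.
    distances = sorted((math.dist(point, x), label) for x, label in training)
    votes = Counter(label for _, label in distances[:k])
    return votes.most_common(1)[0][0]

print(classify((1.1, 0.9)))  # 'A'
print(classify((3.9, 4.1)))  # 'B'
```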
The study of neural networks[134] began with cybernetics researchers, working in the decade before the field of AI research was founded. In the 1960s Frank Rosenblatt developed an important early version, the perceptron.[140] Paul Werbos discovered the backpropagation algorithm in 1974,[141] which led to a renaissance in neural network research and connectionism in general in the middle 1980s. The Hopfield net, a form of attractor network, was first described by John Hopfield in 1982.
Neural networks are applied to the problem of learning, using such techniques as Hebbian learning[142] and the relatively new field of Hierarchical Temporal Memory which simulates the architecture of the neocortex.[143]
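A minimal perceptron in the spirit of Rosenblatt's model (a sketch with invented data: here it learns the logical AND function with a simple error-correction rule) illustrates the kind of learning these early networks performed:

```python
# Minimal perceptron learning the logical AND function.
# Weights start at zero and are nudged by the error after each example.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

w, b = train()
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1] once training has converged
```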
Several algorithms for learning use tools from evolutionary computation, such as genetic algorithms[144] and swarm intelligence.[145]
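A minimal genetic-algorithm sketch (the fitness function, population size, and mutation rate are illustrative, not tuned) evolves bit strings toward the all-ones string by selection, crossover, and mutation:

```python
import random

# Toy genetic algorithm: evolve 10-bit strings toward all ones.
# Fitness is simply the number of ones in the string.

random.seed(1)
LENGTH, POP, GENERATIONS, MUTATION = 10, 20, 40, 0.05

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - bit if random.random() < MUTATION else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]           # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(fitness(individual) for individual in population))  # usually 10
```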
Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[146]
AI researchers have developed several specialized languages for AI research:
AI applications are also often written in standard languages like C++ and languages designed for mathematics, such as Matlab and Lush.
How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.
Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
The broad classes of outcome for an AI test are:
For example, performance at checkers is optimal[151], performance at chess is super-human and nearing strong super-human[152], performance at Go is sub-human[153], and performance at many everyday tasks performed by humans is sub-human.
There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behaviour, data-mining, driverless cars, robot soccer and games.
Artificial intelligence has successfully been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. Frequently, when a technique reaches mainstream use it is no longer considered artificial intelligence, sometimes described as the AI effect.[154]
The term intelligence amplification (IA) has enjoyed a wide currency since William Ross Ashby wrote of "amplifying intelligence" in his Introduction to Cybernetics (1956) and related ideas were explicitly proposed as an alternative to Artificial Intelligence by Hao Wang from the early days of automatic theorem provers.
.."problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. ... It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution. It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. ... Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters — so that, when given difficult problems it persistently gave correct answers — we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence". If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately.
- Ashby, W.R., An Introduction to Cybernetics, Chapman and Hall, London, UK, 1956. Reprinted, Methuen and Company, London, UK, 1964. PDF
"Man-Computer Symbiosis" is a key speculative paper published in 1960 by psychologist/computer scientist J.C.R. Licklider, which envisions that mutually-interdependent, "living together", tightly-coupled human brains and computing machines would prove to complement each other's strengths to a high degree:
"Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."
- Licklider, J.C.R., "Man-Computer Symbiosis", IRE Transactions on Human Factors in Electronics, vol. HFE-1, 4-11, Mar 1960. Eprint
In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet).
Licklider's research was similar in spirit to that of his DARPA contemporary and protégé Douglas Engelbart; both had a view of how computers could be used that was at odds with the then-prevalent views (which saw them as devices principally useful for computation), and both were key proponents of the way in which computers are now used (as generic adjuncts to humans).
Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. He thus set himself to the revolutionary task of developing computer-based technologies for manipulating information directly and of improving individual and group processes for knowledge work. Engelbart's philosophy and research agenda are most clearly and directly expressed in the 1962 research report which Engelbart refers to as his 'bible': Augmenting Human Intellect: A Conceptual Framework. The concept of network-augmented intelligence is attributed to Engelbart on the basis of this pioneering work.
"Increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable. And by complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids."
- Engelbart, D.C., "Augmenting Human Intellect: A Conceptual Framework", Summary Report AFOSR-3233, Stanford Research Institute, Menlo Park, CA, Oct 1962. Eprint
Biotechnology is technology based on biology, especially when used in agriculture, food science, and medicine. The United Nations Convention on Biological Diversity defines biotechnology as:[1]
""Biotechnology" means any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use."
Biotechnology is often used to refer to the genetic engineering technology of the 21st century; however, the term encompasses a wider range and history of procedures for modifying biological organisms according to the needs of humanity, going back to the initial modification of native plants into improved food crops through artificial selection and hybridization. Bioengineering is the science upon which all biotechnological applications are based. With the development of new approaches and modern techniques, traditional biotechnology industries are also acquiring new horizons, enabling them to improve the quality of their products and increase the productivity of their systems.
Before 1971, the term "biotechnology" was primarily used in the food processing and agriculture industries. From the 1970s onward, it began to be used by the Western scientific establishment to refer to laboratory-based techniques being developed in biological research, such as recombinant DNA or tissue culture-based processes, or horizontal gene transfer in living plants using vectors such as the Agrobacterium bacterium to transfer DNA into a host organism. In fact, the term should be used in a much broader sense to describe the whole range of methods, both ancient and modern, used to manipulate organic materials to meet the demands of food production. So the term could be defined as "the application of indigenous and/or scientific knowledge to the management of (parts of) microorganisms, or of cells and tissues of higher organisms, so that these supply goods and services of use to the food industry and its consumers."[2]
Biotechnology combines disciplines like genetics, molecular biology, biochemistry, embryology and cell biology, which are in turn linked to practical disciplines like chemical engineering, information technology, and robotics. Patho-biotechnology describes the exploitation of pathogens or pathogen derived compounds for beneficial effect.
The most practical use of biotechnology, which is still present today, is the cultivation of plants to produce food suitable for humans. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. The processes and methods of agriculture have been refined by other mechanical and biological sciences since its inception. Through early biotechnology, farmers were able to select the best-suited and highest-yield crops to produce enough food to support a growing population. Other uses of biotechnology were required as crops and fields became increasingly large and difficult to maintain. Specific organisms and organism byproducts were used to fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants--one of the first forms of biotechnology.
Cultures such as those in Mesopotamia, Egypt, and Iran developed the process of brewing beer. It is still done by the same basic method of using malted grains (containing enzymes) to convert starch from grains into sugar and then adding specific yeasts to produce beer. In this process the carbohydrates in the grains were broken down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur’s work in 1857, it is still the first use of biotechnology to convert a food source into another form.
Combinations of plants and other organisms were used as medications in many early civilizations. Since as early as 200 BC, people began to use disabled or minute amounts of infectious agents to immunize themselves against infections. These and similar processes have been refined in modern medicine and have led to many developments such as antibiotics, vaccines, and other methods of fighting sickness.
In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, using Clostridium acetobutylicum to ferment corn starch into acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.[3]
The field of modern biotechnology is thought to have largely begun on June 16, 1980, when the United States Supreme Court ruled that a genetically-modified microorganism could be patented in the case of Diamond v. Chakrabarty.[4] Indian-born Ananda Chakrabarty, working for General Electric, had developed a bacterium (derived from the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills. More recently, researchers at a university in Florida have been studying ways to prevent tooth decay: they altered the oral bacterium Streptococcus mutans, stripping it down so that it could not produce lactic acid.
Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.
For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology, for example:
In medicine, modern biotechnology finds promising applications in such areas as
Pharmacogenomics is the study of how the genetic inheritance of an individual affects his/her body’s response to drugs. It is a coined word derived from the words “pharmacology” and “genomics”. It is hence the study of the relationship between pharmaceuticals and genetics. The vision of pharmacogenomics is to be able to design and produce drugs that are adapted to each person’s genetic makeup.[6]
Pharmacogenomics results in the following benefits:[6]
1. Development of tailor-made medicines. Using pharmacogenomics, pharmaceutical companies can create drugs based on the proteins, enzymes and RNA molecules that are associated with specific genes and diseases. These tailor-made drugs promise not only to maximize therapeutic effects but also to decrease damage to nearby healthy cells.
2. More accurate methods of determining appropriate drug dosages. Knowing a patient’s genetics will enable doctors to determine how well his/ her body can process and metabolize a medicine. This will maximize the value of the medicine and decrease the likelihood of overdose.
3. Improvements in the drug discovery and approval process. The discovery of potential therapies will be made easier using genome targets. Genes have been associated with numerous diseases and disorders. With modern biotechnology, these genes can be used as targets for the development of effective new therapies, which could significantly shorten the drug discovery process.
4. Better vaccines. Safer vaccines can be designed and produced by organisms transformed by means of genetic engineering. These vaccines will elicit the immune response without the attendant risks of infection. They will be inexpensive, stable, easy to store, and capable of being engineered to carry several strains of pathogen at once.
Most traditional pharmaceutical drugs are relatively simple molecules that have been found primarily through trial and error to treat the symptoms of a disease or illness. Biopharmaceuticals are large biological molecules known as proteins, and these usually (but not always, as is the case with using insulin to treat type 1 diabetes mellitus) target the underlying mechanisms and pathways of a malady; the biopharmaceutical industry is relatively young. Biopharmaceuticals can deal with targets in humans that may not be accessible with traditional medicines. A patient typically is dosed with a small molecule via a tablet while a large molecule is typically injected.
Small molecules are manufactured by chemistry but large molecules are created by living cells such as those found in the human body: for example, bacteria cells, yeast cells, animal or plant cells.
Modern biotechnology is often associated with the use of genetically altered microorganisms such as E. coli or yeast for the production of substances like synthetic insulin or antibiotics. It can also refer to transgenic animals or transgenic plants, such as Bt corn. Genetically altered mammalian cells, such as Chinese Hamster Ovary (CHO) cells, are also used to manufacture certain pharmaceuticals. Another promising new biotechnology application is the development of plant-made pharmaceuticals.
Biotechnology is also commonly associated with landmark breakthroughs in new medical therapies to treat hepatitis B, hepatitis C, cancers, arthritis, haemophilia, bone fractures, multiple sclerosis, and cardiovascular disorders. The biotechnology industry has also been instrumental in developing molecular diagnostic devices that can be used to define the target patient population for a given biopharmaceutical. Herceptin, for example, was the first drug approved for use with a matching diagnostic test and is used to treat breast cancer in women whose cancer cells express the protein HER2.
Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene to a plasmid vector and inserting it into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of cattle and/or pigs. The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at low cost.[7]
Since then modern biotechnology has made it possible to produce more easily and cheaply human growth hormone, clotting factors for hemophiliacs, fertility drugs, erythropoietin and other drugs.[8] Most drugs today are based on about 500 molecular targets. Genomic knowledge of the genes involved in diseases, disease pathways, and drug-response sites are expected to lead to the discovery of thousands more new targets.[8]
Genetic testing involves the direct examination of the DNA molecule itself. A scientist scans a patient’s DNA sample for mutated sequences.
There are two major types of gene tests. In the first type, a researcher may design short pieces of DNA (“probes”) whose sequences are complementary to the mutated sequences. These probes will seek their complement among the base pairs of an individual’s genome. If the mutated sequence is present in the patient’s genome, the probe will bind to it and flag the mutation. In the second type, a researcher may conduct the gene test by comparing the sequence of DNA bases in a patient’s gene to the corresponding sequence found in healthy individuals.
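As a toy illustration of the probe idea (the sequences are invented, and real hybridization assays are far more involved), the sketch below builds the complement of a mutated sequence and checks whether that probe would find its target in a sample:

```python
# Toy DNA-probe illustration: a probe is the complement of the mutated
# sequence, so it binds wherever the mutated sequence appears in a sample.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(sequence: str) -> str:
    """Base-by-base complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in sequence)

def probe_binds(sample: str, mutated_sequence: str) -> bool:
    """A probe complementary to the mutated sequence hybridizes exactly
    where the mutated sequence itself occurs in the sample."""
    return mutated_sequence in sample

sample = "ATGCCGTAAGCTTGACC"
print(complement("TAAGCT"))            # 'ATTCGA' -- the probe that would be synthesized
print(probe_binds(sample, "TAAGCT"))   # True  -> mutation flagged
print(probe_binds(sample, "TTTTTT"))   # False -> no binding
```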
Genetic testing is now used for:
Some genetic tests are already available, although most of them are used in developed countries. The tests currently available can detect mutations associated with rare genetic disorders like cystic fibrosis, sickle cell anemia, and Huntington’s disease. Recently, tests have been developed to detect mutation for a handful of more complex conditions such as breast, ovarian, and colon cancers. However, gene tests may not detect every mutation associated with a particular condition because many are as yet undiscovered, and the ones they do detect may present different risks to different people and populations.[8]
Several issues have been raised regarding the use of genetic testing:
1. Absence of cure. There is still a lack of effective treatment or preventive measures for many diseases and conditions now being diagnosed or predicted using gene tests. Thus, revealing information about risk of a future disease that has no existing cure presents an ethical dilemma for medical practitioners.
2. Ownership and control of genetic information. Who will own and control genetic information, or information about genes, gene products, or inherited characteristics derived from an individual or a group of people like indigenous communities? At the macro level, there is a possibility of a genetic divide, with developing countries that do not have access to medical applications of biotechnology being deprived of benefits accruing from products derived from genes obtained from their own people. Moreover, genetic information can pose a risk for minority population groups as it can lead to group stigmatization.
At the individual level, the absence of privacy and anti-discrimination legal protections in most countries can lead to discrimination in employment or insurance or other misuse of personal genetic information. This raises questions such as whether genetic privacy is different from medical privacy.[9]
3. Reproductive issues. These include the use of genetic information in reproductive decision-making and the possibility of genetically altering reproductive cells that may be passed on to future generations. For example, germline therapy forever changes the genetic make-up of an individual’s descendants. Thus, any error in technology or judgment may have far-reaching consequences. Ethical issues like designer babies and human cloning have also given rise to controversies between and among scientists and bioethicists, especially in the light of past abuses with eugenics.
4. Clinical issues. These center on the capabilities and limitations of doctors and other health-service providers, people identified with genetic conditions, and the general public in dealing with genetic information.
5. Effects on social institutions. Genetic tests reveal information about individuals and their families. Thus, test results can affect the dynamics within social institutions, particularly the family.
6. Conceptual and philosophical implications regarding human responsibility, free will vis-à-vis genetic determinism, and the concepts of health and disease.
Gene therapy may be used for treating, or even curing, genetic and acquired diseases like cancer and AIDS by using normal genes to supplement or replace defective genes or to bolster a normal function such as immunity. It can be used to target somatic (i.e., body) or germ (i.e., egg and sperm) cells. In somatic gene therapy, the genome of the recipient is changed, but this change is not passed along to the next generation. In contrast, in germline gene therapy, the egg and sperm cells of the parents are changed for the purpose of passing on the changes to their offspring.
There are basically two ways of implementing a gene therapy treatment:
1. Ex vivo, which means “outside the body” – Cells from the patient’s blood or bone marrow are removed and grown in the laboratory. They are then exposed to a virus carrying the desired gene. The virus enters the cells, and the desired gene becomes part of the DNA of the cells. The cells are allowed to grow in the laboratory before being returned to the patient by injection into a vein.
2. In vivo, which means “inside the body” – No cells are removed from the patient’s body. Instead, vectors are used to deliver the desired gene to cells in the patient’s body.
Currently, the use of gene therapy is limited. Somatic gene therapy is primarily at the experimental stage. Germline therapy is the subject of much discussion but it is not being actively investigated in larger animals and human beings.
As of June 2001, more than 500 clinical gene-therapy trials involving about 3,500 patients have been identified worldwide. Around 78% of these are in the United States, with Europe having 18%. These trials focus on various types of cancer, although other multigenic diseases are being studied as well. Recently, two children born with severe combined immunodeficiency disorder (“SCID”) were reported to have been cured after being given genetically engineered cells.
Gene therapy faces many obstacles before it can become a practical approach for treating disease.[10] At least four of these obstacles are as follows:
1. Gene delivery tools. Genes are inserted into the body using gene carriers called vectors. The most common vectors now are viruses, which have evolved a way of encapsulating and delivering their genes to human cells in a pathogenic manner. Scientists manipulate the genome of the virus by removing the disease-causing genes and inserting the therapeutic genes. However, while viruses are effective, they can introduce problems like toxicity, immune and inflammatory responses, and gene control and targeting issues.
2. Limited knowledge of the functions of genes. Scientists currently know the functions of only a few genes. Hence, gene therapy can address only some genes that cause a particular disease. Worse, it is not known exactly whether genes have more than one function, which creates uncertainty as to whether replacing such genes is indeed desirable.
3. Multigene disorders and effect of environment. Most genetic disorders involve more than one gene. Moreover, most diseases involve the interaction of several genes and the environment. For example, many people with cancer not only inherit the disease gene for the disorder, but may have also failed to inherit specific tumor suppressor genes. Diet, exercise, smoking and other environmental factors may have also contributed to their disease.
4. High costs. Since gene therapy is relatively new and at an experimental stage, it is an expensive treatment to undertake. This explains why current studies are focused on illnesses commonly found in developed countries, where more people can afford to pay for treatment. It may take decades before developing countries can take advantage of this technology.
The Human Genome Project is an initiative of the U.S. Department of Energy (“DOE”) that aims to generate a high-quality reference sequence for the entire human genome and identify all the human genes.
The DOE and its predecessor agencies were assigned by the U.S. Congress to develop new energy resources and technologies and to pursue a deeper understanding of potential health and environmental risks posed by their production and use. In 1986, the DOE announced its Human Genome Initiative. Shortly thereafter, the DOE and National Institutes of Health developed a plan for a joint Human Genome Project (“HGP”), which officially began in 1990.
The HGP was originally planned to last 15 years. However, rapid technological advances and worldwide participation accelerated the completion date to 2003 (making it a 13-year project). It has already enabled gene hunters to pinpoint genes associated with more than 30 disorders.[11]
Cloning involves the removal of the nucleus from one cell and its placement in an unfertilized egg cell whose nucleus has either been deactivated or removed.
There are two types of cloning:
1. Reproductive cloning. After a few divisions, the egg cell is placed into a uterus where it is allowed to develop into a fetus that is genetically identical to the donor of the original nucleus.
2. Therapeutic cloning.[12] The egg is placed into a Petri dish where it develops into embryonic stem cells, which have shown potential for treating several ailments.[13]
In February 1997, cloning became the focus of media attention when Ian Wilmut and his colleagues at the Roslin Institute announced the successful cloning of a sheep, named Dolly, from the mammary glands of an adult female. The cloning of Dolly made it apparent to many that the techniques used to produce her could someday be used to clone human beings.[14] This stirred a lot of controversy because of its ethical implications.
In January 2008, Christopher S. Chen made an exciting discovery that could potentially alter the future of medicine. He found that cell signaling that is normally biochemically regulated could be simulated with magnetic nanoparticles attached to a cell surface. Chen’s research was inspired by the discovery of Donald Ingber, Robert Mannix, and Sanjay Kumar, who found that a nanobead can be attached to a monovalent ligand, and that these compounds can bind to mast cells without triggering the clustering response. Usually, when a multivalent ligand attaches to the cell’s receptors, the signal pathway is activated. These nanobeads, however, only initiated cell signaling when a magnetic field was applied to the area, causing the nanobeads to cluster. It is important to note that it was this clustering that triggered the cellular response, not merely the force applied to the cell by the receptor binding. The experiment was carried out several times with time-varying activation cycles, and there is no reason to suggest that the response time could not be reduced to seconds or even milliseconds. Such a short response time would have exciting applications in the medical field. Currently it takes minutes or hours for a pharmaceutical to affect its environment, and when it does so, the changes are irreversible. With the current research in mind, though, a future of millisecond response times and reversible effects is possible. Imagine being able to treat various allergic responses, colds, and other such ailments almost instantaneously. This future has not yet arrived, and further research and testing must be done in this area, but this is an important step in the right direction.[15]
Using the techniques of modern biotechnology, one or two genes may be transferred to a highly developed crop variety to impart a new character that would increase its yield (30). However, while increases in crop yield are the most obvious applications of modern biotechnology in agriculture, they are also the most difficult to achieve. Current genetic engineering techniques work best for effects that are controlled by a single gene. Many of the genetic characteristics associated with yield (e.g., enhanced growth) are controlled by a large number of genes, each of which has a minimal effect on the overall yield (31). There is, therefore, much scientific work to be done in this area.
Crops containing genes that will enable them to withstand biotic and abiotic stresses may be developed. For example, drought and excessively salty soil are two important limiting factors in crop productivity. Biotechnologists are studying plants that can cope with these extreme conditions in the hope of finding the genes that enable them to do so and eventually transferring these genes to the more desirable crops. One of the latest developments is the identification of a plant gene, At-DBF2, from thale cress, a tiny weed that is often used for plant research because it is very easy to grow and its genetic code is well mapped out. When this gene was inserted into tomato and tobacco cells, the cells were able to withstand environmental stresses like salt, drought, cold and heat far better than ordinary cells. If these preliminary results prove successful in larger trials, then At-DBF2 genes can help in engineering crops that can better withstand harsh environments (32). Researchers have also created transgenic rice plants that are resistant to rice yellow mottle virus (RYMV). In Africa, this virus destroys the majority of the rice crop and makes the surviving plants more susceptible to fungal infections (33).
Proteins in foods may be modified to increase their nutritional qualities. Proteins in legumes and cereals may be transformed to provide the amino acids needed by human beings for a balanced diet (34). A good example is the work of Professors Ingo Potrykus and Peter Beyer on the so-called Golden Rice™ (discussed below).
Modern biotechnology can be used to slow down the process of spoilage so that fruit can ripen longer on the plant and then be transported to the consumer with a still reasonable shelf life. This improves the taste, texture and appearance of the fruit. More importantly, it could expand the market for farmers in developing countries due to the reduction in spoilage.
The first genetically modified food product was a tomato which was transformed to delay its ripening (35). Researchers in Indonesia, Malaysia, Thailand, Philippines and Vietnam are currently working on delayed-ripening papaya in collaboration with the University of Nottingham and Zeneca (36).
Biotechnology in cheese production[16]: enzymes produced by micro-organisms provide an alternative to animal rennet, a cheese coagulant, and a more reliable supply for cheese makers. This also eliminates possible public concerns with animal-derived material. Enzymes offer an animal-friendly alternative to animal rennet. While providing constant quality, they are also less expensive.
About 85 million tons of wheat flour are used every year to bake bread[17]. Adding an enzyme called maltogenic amylase to the flour keeps bread fresher for longer. Assuming that 10-15% of bread is thrown away, if bread could stay fresh another 5-7 days, then 2 million tons of flour per year would be saved. That corresponds to 40% of the bread consumed in a country such as the USA. This means more bread becomes available with no increase in input. In combination with other enzymes, bread can also be made bigger, more appetizing and better in a range of other ways.
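As a rough illustration of the arithmetic behind these figures, the short Python sketch below multiplies the quoted flour tonnage by the quoted waste fraction; the assumption that saved bread translates one-to-one into saved flour is added here purely for illustration.

# Back-of-the-envelope check of the bread/flour figures quoted above.
# The 85 million tons and the 10-15% waste fraction come from the text; the
# one-to-one mapping of saved bread onto saved flour is a simplifying assumption.
flour_per_year_tons = 85_000_000
waste_low, waste_high = 0.10, 0.15

wasted_low = flour_per_year_tons * waste_low     # 8.5 million tons
wasted_high = flour_per_year_tons * waste_high   # 12.75 million tons

print(f"Flour lost to discarded bread: {wasted_low/1e6:.1f}-{wasted_high/1e6:.1f} million tons/year")
print(f"Cited savings of 2 million tons is {2e6/wasted_high:.0%}-{2e6/wasted_low:.0%} of that waste")

In other words, the cited 2-million-ton saving corresponds to recovering roughly a sixth to a quarter of the flour currently lost to discarded bread.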
Most of the current commercial applications of modern biotechnology in agriculture are on reducing the dependence of farmers on agrochemicals. For example, Bacillus thuringiensis (Bt) is a soil bacterium that produces a protein with insecticidal qualities. Traditionally, a fermentation process has been used to produce an insecticidal spray from these bacteria. In this form, the Bt toxin occurs as an inactive protoxin, which requires digestion by an insect to be effective. There are several Bt toxins and each one is specific to certain target insects. Crop plants have now been engineered to contain and express the genes for Bt toxin, which they produce in its active form. When a susceptible insect ingests the transgenic crop cultivar expressing the Bt protein, it stops feeding and soon thereafter dies as a result of the Bt toxin binding to its gut wall. Bt corn is now commercially available in a number of countries to control corn borer (a lepidopteran insect), which is otherwise controlled by spraying (a more difficult process).
Crops have also been genetically engineered to acquire tolerance to broad-spectrum herbicides. The lack of cost-effective herbicides with broad-spectrum activity and no crop injury was a consistent limitation in crop weed management. Multiple applications of numerous herbicides were routinely used to control a wide range of weed species detrimental to agronomic crops. Weed management tended to rely on preemergence applications; that is, herbicides were sprayed in anticipation of expected weed infestations rather than in response to the weeds actually present. Mechanical cultivation and hand weeding were often necessary to control weeds not controlled by herbicide applications. The introduction of herbicide-tolerant crops has the potential of reducing the number of herbicide active ingredients used for weed management, reducing the number of herbicide applications made during a season, and increasing yield due to improved weed management and less crop injury. Transgenic crops that express tolerance to glyphosate, glufosinate and bromoxynil have been developed. These herbicides can now be sprayed on transgenic crops without inflicting damage on the crops while killing nearby weeds (37).
From 1996 to 2001, herbicide tolerance was the most dominant trait introduced to commercially available transgenic crops, followed by insect resistance. In 2001, herbicide tolerance deployed in soybean, corn and cotton accounted for 77% of the 626,000 square kilometres planted to transgenic crops; Bt crops accounted for 15%; and "stacked genes" for herbicide tolerance and insect resistance used in both cotton and corn accounted for 8% (38).
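Purely to restate those percentages in absolute terms, the small Python sketch below converts the quoted 2001 trait shares into planted areas; the figures come from the text, and only the arithmetic is added.

# Convert the 2001 trait shares quoted above into absolute planted areas.
# The 626,000 km^2 total and the 77% / 15% / 8% split are taken from the text.
total_km2 = 626_000
shares = {
    "herbicide tolerance": 0.77,
    "Bt insect resistance": 0.15,
    "stacked traits": 0.08,
}
for trait, share in shares.items():
    print(f"{trait}: {total_km2 * share:,.0f} km^2")
# herbicide tolerance: 482,020 km^2; Bt: 93,900 km^2; stacked traits: 50,080 km^2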
Biotechnology is being applied for novel uses other than food. For example, oilseed can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals. Potato, tomato, rice, and other plants have been genetically engineered to produce insulin and certain vaccines. If future clinical trials prove successful, the advantages of edible vaccines would be enormous, especially for developing countries. The transgenic plants may be grown locally and cheaply. Homegrown vaccines would also avoid logistical and economic problems posed by having to transport traditional preparations over long distances and keeping them cold while in transit. And since they are edible, they will not need syringes, which are not only an additional expense in the traditional vaccine preparations but also a source of infections if contaminated.[18] In the case of insulin grown in transgenic plants, it might not be administered as an edible protein, but it could be produced at significantly lower cost than insulin produced in costly bioreactors.
There is, however, another side to the agricultural biotechnology issue. It includes increased herbicide usage and resultant herbicide resistance, "super weeds," residues on and in food crops, genetic contamination of non-GM crops that hurts organic and conventional farmers, damage to wildlife from glyphosate, etc.[2][3]
Biotechnological engineering or biological engineering is a branch of engineering that focuses on biotechnologies and biological science. It includes different disciplines such as biochemical engineering, biomedical engineering, bio-process engineering, biosystem engineering and so on. Because the field is so new, the definition of a bioengineer is not yet settled. In general, however, it involves an integrated approach combining the fundamental biological sciences with traditional engineering principles.
Bioengineers are often employed to scale up bioprocesses from the laboratory scale to the manufacturing scale. Moreover, as with most engineers, they often deal with management, economic and legal issues. Since patents and regulation (e.g. FDA regulation in the U.S.) are very important issues for biotech enterprises, bioengineers are often required to have knowledge related to these issues.
The increasing number of biotech enterprises is likely to create a need for bioengineers in the years to come. Many universities throughout the world are now providing programs in bioengineering and biotechnology (as independent programs or specialty programs within more established engineering fields).
Biotechnology is being used to engineer and adapt organisms especially microorganisms in an effort to find sustainable ways to clean up contaminated environments. The elimination of a wide range of pollutants and wastes from the environment is an absolute requirement to promote a sustainable development of our society with low environmental impact. Biological processes play a major role in the removal of contaminants and biotechnology is taking advantage of the astonishing catabolic versatility of microorganisms to degrade/convert such compounds. New methodological breakthroughs in sequencing, genomics, proteomics, bioinformatics and imaging are producing vast amounts of information. In the field of Environmental Microbiology, genome-based global studies open a new era providing unprecedented in silico views of metabolic and regulatory networks, as well as clues to the evolution of degradation pathways and to the molecular adaptation strategies to changing environmental conditions. Functional genomic and metagenomic approaches are increasing our understanding of the relative importance of different pathways and regulatory networks to carbon flux in particular environments and for particular compounds and they will certainly accelerate the development of bioremediation technologies and biotransformation processes.[19]
Marine environments are especially vulnerable since oil spills of coastal regions and the open sea are poorly containable and mitigation is difficult. In addition to pollution through human activities, millions of tons of petroleum enter the marine environment every year from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a remarkable recently discovered group of specialists, the so-called hydrocarbonoclastic bacteria (HCB).[20]
There are various TV series, films, and documentaries with biotechnological themes: Surface, X-Files, The Island, I Am Legend, Torchwood, Horizon. Most of these convey the endless possibilities of how the technology can go wrong, and the consequences of this.
The majority of newspapers also show pessimistic viewpoints toward stem cell research, genetic engineering and the like. Some would describe the media's overarching reaction to biotechnology as simple misunderstanding and fright. While there are legitimate concerns about the overwhelming power this technology may bring, most condemnations of the technology are a result of religious beliefs.
Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter on the atomic and molecular scale, normally 1 to 100 nanometers, and the fabrication of devices with critical dimensions that lie within that size range.
It is a highly multidisciplinary field, drawing from fields such as applied physics, materials science, interface and colloid science, device physics, supramolecular chemistry (which refers to the area of chemistry that focuses on the noncovalent bonding interactions of molecules), self-replicating machines and robotics, chemical engineering, mechanical engineering, biological engineering, and electrical engineering. Much speculation exists as to what may result from these lines of research. Nanotechnology can be seen as an extension of existing sciences into the nanoscale, or as a recasting of existing sciences using a newer, more modern term.
Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control. The impetus for nanotechnology comes from a renewed interest in Interface and Colloid Science, coupled with a new generation of analytical tools such as the atomic force microscope (AFM), and the scanning tunneling microscope (STM). Combined with refined processes such as electron beam lithography and molecular beam epitaxy, these instruments allow the deliberate manipulation of nanostructures, and led to the observation of novel phenomena.
Examples of nanotechnology in modern use are the manufacture of polymers based on molecular structure, and the design of computer chip layouts based on surface science. Despite the great promise of numerous nanotechnologies such as quantum dots and nanotubes, real commercial applications have mainly used the advantages of colloidal nanoparticles in bulk form, such as suntan lotion, cosmetics, protective coatings, drug delivery,[1] and stain resistant clothing.
The first use of the concepts in 'nano-technology' (but predating use of that name) was in "There's Plenty of Room at the Bottom," a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. This basic idea appears plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products. The term "nanotechnology" was defined by Tokyo Science University Professor Norio Taniguchi in a 1974 paper (N. Taniguchi, "On the Basic Concept of 'Nano-Technology'," Proc. Intl. Conf. Prod. London, Part II, British Society of Precision Engineering, 1974.) as follows: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation,[2] and so the term acquired its current sense. Nanotechnology and nanoscience got started in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied; this led to a fast-increasing number of metal oxide nanoparticles and quantum dots. The atomic force microscope was invented six years after the STM.
One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. For comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12-0.15 nm, and a DNA double-helix has a diameter of around 2 nm. On the other hand, the smallest cellular lifeforms, the bacteria of the genus Mycoplasma, are around 200 nm in length. To put that scale into context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth.[3] Or another way of putting it: a nanometer is the amount a man's beard grows in the time it takes him to raise the razor to his face.[3]
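A quick sanity check of the marble-to-Earth comparison, sketched in Python below; the 1 cm marble diameter is an assumption made here, while Earth's mean diameter of roughly 12,742 km is a standard figure.

# Sanity check of the marble-to-Earth analogy above.
# Assumes a marble diameter of about 1 cm; Earth's mean diameter is ~12,742 km.
marble_m = 0.01                 # assumed marble diameter, in meters
earth_m = 12_742_000            # Earth's mean diameter, in meters

print(f"nanometer : meter = 1 : {1 / 1e-9:.0e}")            # 1 : 1e+09
print(f"marble    : Earth = 1 : {earth_m / marble_m:.1e}")  # 1 : 1.3e+09
# Both ratios are about a billion to one, so the analogy holds to within a small factor.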
A number of physical phenomena become noticeably pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions; however, it becomes dominant when the nanometer size range is reached. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, which alters the mechanical, thermal and catalytic properties of materials. Novel mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
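To make the surface-area-to-volume point concrete, here is a minimal Python sketch for idealized spherical particles; the radii chosen are illustrative, not taken from the text. For a sphere the ratio is simply 3/r, so shrinking the particle raises it in direct proportion.

# Surface-area-to-volume ratio of an idealized spherical particle.
# area = 4*pi*r^2, volume = (4/3)*pi*r^3, so area/volume = 3/r.
def surface_to_volume(radius_nm: float) -> float:
    """Return the surface-area-to-volume ratio (per nm) of a sphere."""
    return 3.0 / radius_nm

for r in (1000.0, 100.0, 10.0, 1.0):     # illustrative radii in nanometers
    print(f"radius {r:6.0f} nm  ->  A/V = {surface_to_volume(r):.3f} per nm")
# Shrinking the radius 1000-fold raises the relative surface area 1000-fold,
# one reason catalytic and thermal behavior change so sharply at the nanoscale.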
Materials reduced to the nanoscale can suddenly show very different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances become transparent (copper); inert materials become catalysts (platinum); stable materials turn combustible (aluminum); solids turn into liquids at room temperature (gold); insulators become conductors (silicon). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these unique quantum and surface phenomena that matter exhibits at the nanoscale.
Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to almost any structure. These methods are used today to produce a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well defined manner.
These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry so that the components automatically arrange themselves into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific conformation or arrangement is favored due to non-covalent intermolecular forces. The Watson-Crick basepairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of the protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
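As a toy illustration of molecular recognition by complementarity, the Python sketch below applies the Watson-Crick pairing rules (A-T, G-C) to an invented sequence; it is only meant to show how a "designed to be complementary" rule works, not to model real chemistry.

# Toy illustration of molecular recognition via Watson-Crick base pairing:
# each base "recognizes" exactly one complementary partner (A-T, G-C).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand (same orientation, for simplicity)."""
    return "".join(COMPLEMENT[base] for base in strand)

strand = "ATGCGT"                                  # invented example sequence
print(strand, "pairs with", complement(strand))    # ATGCGT pairs with TACGCA
# Two strands designed this way are mutually attractive and assemble into a
# double helix, the same complementarity principle that bottom-up approaches
# seek to exploit in designed components.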
Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson-Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer novel constructs in addition to natural ones.
Molecular nanotechnology, sometimes called molecular manufacturing, is a term given to the concept of engineered nanosystems (nanoscale machines) operating on the molecular scale. It is especially associated with the concept of a molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi) it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated molecular machines were possible: by the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.
It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers[4] have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification (PNAS-1981). The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.
But Drexler's analysis is very qualitative and does not address very pressing issues, such as the "fat fingers" and "sticky fingers" problems. In general it is very difficult to assemble devices on the atomic scale, as all one has to position atoms with are other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[5] is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Yet another view, put forward by the late Richard Smalley, is that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.
This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[6] Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.
An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.
Nanotechnology research and development can be grouped into several broad subfields:
1. Nanomaterials: subfields which develop or study materials having unique properties arising from their nanoscale dimensions.[8]
2. Bottom-up approaches: these seek to arrange smaller components into more complex assemblies.
3. Top-down approaches: these seek to create smaller devices by using larger ones to direct their assembly.
4. Functional approaches: these seek to develop components of a desired functionality without regard to how they might be assembled.
5. Speculative subfields: these seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. They often take a big-picture view of nanotechnology, with more emphasis on its societal implications than on the details of how such inventions could actually be created.
The first observations and size measurements of nano-particles were made during the first decade of the 20th century. They are mostly associated with the name of Zsigmondy, who made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less. He published a book in 1914.[19] He used an ultramicroscope that employs the dark-field method for seeing particles with sizes much less than the wavelength of light.
There are traditional techniques developed during 20th century in Interface and Colloid Science for characterizing nanomaterials. These are widely used for first generation passive nanomaterials specified in the next section.
These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solutions. Some of these methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions.[20]
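In the simplest numerical sense, characterizing a size distribution means reporting its central value and spread; the Python sketch below does this for a set of invented diameters. Real instruments infer the distribution indirectly, as described above, so this is illustration only.

# Minimal sketch of summarizing a particle size distribution.
# The diameters below are invented values in nanometers, for illustration only.
import statistics

diameters_nm = [42, 47, 51, 38, 55, 44, 49, 46, 52, 41]

mean_d = statistics.mean(diameters_nm)
spread = statistics.stdev(diameters_nm)
print(f"mean diameter: {mean_d:.1f} nm")
print(f"relative spread (coefficient of variation): {spread / mean_d:.1%}")
# A very broad spread, or a mean far above 100 nm, would suggest aggregation
# rather than a true nano-sized dispersion.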
There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nano-particles in solutions. This information is required for proper system stabilization, preventing aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering and electroacoustics. The last of these, for instance the colloid vibration current method, is suitable for characterizing concentrated systems.
The next group of nanotechnological techniques includes those used for fabrication of nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further includes molecular self-assembly techniques such as those employing di-block copolymers. However, all of these techniques preceded the nanotech era and are extensions in the development of scientific advancements rather than techniques which were devised with the sole purpose of creating nanotechnology or which were results of nanotechnology research.
There are several important modern developments. The atomic force microscope (AFM) and the Scanning Tunneling Microscope (STM) are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy, all flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, that made it possible to see structures at the nanoscale. The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). Feature-oriented scanning-positioning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of low scanning velocity of the microscope. Various techniques of nanolithography such as dip pen nanolithography, electron beam lithography or nanoimprint lithography were also developed. Lithography is a top-down fabrication technique where a bulk material is reduced in size to nanoscale pattern.
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are currently made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning-positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present, it is expensive and time-consuming for mass production but very suitable for laboratory experimentation.
In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories such as John R. Arthur, Alfred Y. Cho, and Art C. Gossard developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
Newer techniques such as Dual Polarisation Interferometry are enabling scientists to measure quantitatively the molecular interactions that take place at the nano-scale.
The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes. Another nanoproperty, high surface area to volume ratio, allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers), allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system). A very exciting research question is how to make these imaging nanoparticles do more things for cancer. For instance, is it possible to manufacture multifunctional nanoparticles that would detect, image, and then proceed to treat a tumor? This question is currently under vigorous investigation; the answer to which could shape the future of cancer treatment.[21]
Although there has been much hype about the potential applications of nanotechnology, most current commercialized applications are limited to the use of "first generation" passive nanomaterials. These include titanium dioxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances; zinc oxide nanoparticles in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide nanoparticles as a fuel catalyst. The Woodrow Wilson Center for International Scholars' Project on Emerging Nanotechnologies hosts an online inventory of consumer products which now contain nanomaterials.[22]
However further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies currently branded with the term 'nano' are sometimes little related to and fall far short of the most ambitious and transformative technological goals of the sort in molecular manufacturing proposals, the term still connotes such ideas. Thus there may be a danger that a "nano bubble" will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work.
The National Science Foundation (a major source of funding for nanotechnology in the United States) funded researcher David Berube to study the field of nanotechnology. His findings are published in the monograph “Nano-Hype: The Truth Behind the Nanotechnology Buzz". This published study (with a foreword by Mihail Roco, Senior Advisor for Nanotechnology at the National Science Foundation) concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes."
Another large and beneficial outcome of nanotechnology is the production of potable water by means of nanofiltration. Where much of the developing world lacks access to reliable water sources, nanotechnology may alleviate these issues upon further testing, as has been performed in countries such as South Africa. It is important that solute levels in treated water are maintained so that necessary nutrients still reach people. In turn, further testing would be pertinent to check for any signs of nanotoxicology and any negative effects on biological organisms.[23]
In 1999, the ultimate CMOS transistor developed at the Laboratory for Electronics and Information Technology in Grenoble, France, tested the limits of the principles of the MOSFET transistor with a diameter of 18 nm (approximately 70 atoms placed side by side). This was almost 10 times smaller than the smallest industrial transistor in 2003 (130 nm in 2003, 90 nm in 2004 and 65 nm in 2005). It enabled the theoretical integration of seven billion junctions on a €1 coin. The CMOS transistor created in 1999, however, was not a simple research experiment to study how CMOS technology functions, but rather a demonstration of how this technology functions now that we are getting ever closer to working on a molecular scale. Today it would be impossible to master the coordinated assembly of a large number of these transistors on a circuit, and it would also be impossible to create this on an industrial level.[24]
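The "almost 10 times smaller" claim can be checked with the rough Python arithmetic below; the assumption that device area scales with the square of the feature size is a simplification added here for illustration.

# Rough scaling arithmetic for the transistor sizes quoted above.
# Assumes, as a simplification, that device area scales as the square of the feature size.
experimental_nm = 18
industrial_nodes_nm = [130, 90, 65]      # the 2003, 2004 and 2005 nodes from the text

for node in industrial_nodes_nm:
    linear = node / experimental_nm
    print(f"{node} nm node: {linear:.1f}x larger linearly, ~{linear ** 2:.0f}x more area per device")
# The 130 nm node comes out ~7x larger linearly (and ~52x in area), which is the
# sense in which the 18 nm device was "almost 10 times smaller" than 2003 industry parts.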
Due to the far-ranging claims that have been made about potential applications of nanotechnology, a number of concerns have been raised about what effects these will have on our society if realized, and what action if any is appropriate to mitigate these risks.
One area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. Groups such as the Center for Responsible Nanotechnology have advocated that nanotechnology should be specially regulated by governments for these reasons. Others counter that overregulation would stifle scientific research and the development of innovations which could greatly benefit mankind.
Other experts, including the director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies, David Rejeski, have testified[25] that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. More recently, local municipalities have passed (Berkeley, CA) or are considering (Cambridge, MA) ordinances requiring nanomaterial manufacturers to disclose the known risks of their products.
Longer-term concerns center on the implications that new technologies will have for society at large, and whether these could possibly lead to either a post scarcity economy, or alternatively exacerbate the wealth gap between developed and developing nations.
Robotics is the science and technology of robots, their design, manufacture, and application.[1] Robotics requires a working knowledge of electronics, mechanics and software, and is usually accompanied by a large working knowledge of many subjects.[2] A person working in the field is a roboticist.
Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of autonomous control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being akin to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles) and joints which can allow one or more degrees of freedom. Most contemporary robots use open serial chains in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics). Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.
The word robotics was first used in print by Isaac Asimov, in his science fiction short story "Runaround", published in March 1942 in Astounding Science Fiction.[3] While it was based on the word "robot" coined by science fiction author Karel Čapek, Asimov was unaware that he was coining a new term. The design of electrical devices is called electronics, so the design of robots is called robotics.[4] Before the coining of the term, however, there was interest in ideas similar to robotics (namely automata and androids) dating as far back as the 8th or 7th century BC. In the Iliad, the god Hephaestus made talking handmaidens out of gold.[5] Archytas of Tarentum is credited with creating a mechanical pigeon in 400 BC.[6] Robots are used in industrial, military, exploration, homemaking, and academic and research applications.[7]
The actuators are the 'muscles' of a robot; the parts which convert stored energy into movement. By far the most popular actuators are electric motors, but there are many others, some of which are powered by electricity, while others use chemicals, or compressed air.
Robots which must work in the real world require some way to manipulate objects: to pick up, modify, destroy or otherwise have an effect. Thus the 'hands' of a robot are often referred to as end effectors[18], while the arm is referred to as a manipulator.[19] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand.
For the definitive guide to all forms of robot end effectors, their design and usage, consult the book "Robot Grippers".[22]
For simplicity, most mobile robots have four wheels. However, some researchers have tried to create more complex wheeled robots, with only one or two wheels.
If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually communicate with humans by talking, gestures and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is quite unnatural for the robot. It will be quite a while before robots interact as naturally as the fictional C-3PO.
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases - perception, processing and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). Using strategies from the field of control theory, this information is processed to calculate the appropriate signals to the actuators (motors) which move the mechanical structure. The control of a robot involves path planning, pattern recognition, obstacle avoidance, etc. More complex and adaptable control strategies can be referred to as artificial intelligence.
The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance and singularity avoidance. Once all relevant positions, velocities and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot.
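As a minimal sketch of direct versus inverse kinematics, the Python example below uses the standard textbook planar two-link arm; the link lengths and target angles are illustrative values, not anything taken from the text.

# Direct and inverse kinematics for a planar two-link arm (textbook example).
# Link lengths and angles are illustrative values.
import math

L1, L2 = 1.0, 0.8   # link lengths in meters (assumed)

def forward(theta1: float, theta2: float):
    """Direct kinematics: joint angles (radians) -> end-effector position (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x: float, y: float):
    """Inverse kinematics: desired (x, y) -> one valid pair of joint angles (elbow-down)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

target = forward(0.4, 0.6)                            # pick a reachable point
print("recovered joint angles:", inverse(*target))    # approximately (0.4, 0.6)

Dynamics would then take such joint trajectories and compute the actuator forces needed to realize them, as described above.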
In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure and control of robots must be developed and implemented.
Authentic leadership is a function of those purposes, those commitments of service and the discipline anchoring them, not a function of self-expression.
Leadership is a function of a leader, follower, and situation that are appropriate for one another.
Leadership is a function of knowing yourself, having a vision that is well communicated, building trust among colleagues, and taking effective action to ...
Leadership is a function of team unity.
Transformative Risk Management by Andres Agostini
HAZARD is a function of a solvent's toxicity and the amount which volatilizes and ...
Posted by Transformative Risk Management (Andres Agostini) at 13:54 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
HAZARD is a function of the way a chemical is produced, used or discarded.
Posted by Transformative Risk Management (Andres Agostini) at 13:53 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
HAZARD is a function of the probability.
Posted by Transformative Risk Management (Andres Agostini) at 13:51 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
Risk is a function of hazard, exposure and dose. Even a hazardous material doesn't pose risk if there is no exposure.
Posted by Transformative Risk Management (Andres Agostini) at 13:50 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
RISK is a function of the existence of the human being, who lives in a social environment under permanent variation.
Posted by Transformative Risk Management (Andres Agostini) at 13:48 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
RISK is a function of the chemical’s toxicity and exposure to it.
Posted by Transformative Risk Management (Andres Agostini) at 13:47 0 comments
Labels: Transformative Risk Management by Andres Agostini
Transformative Risk Management by Andres Agostini
Risk is a function of price, but risk can certainly be a matter of perception. Real risk, perceived risk and relative risk have flowed like a river through much of human history, slightly chaotic in nature and a little bit dangerous, no?
Enterprise Hazard Termination (Andres Agostini) Ich bin Singularitarian
REFLECTION ON "
The 'risk' posed by the 'hazard' is a function of the probability (lightning strike, conductor failure) and the consequence (death, financial loss, etc.).
The potential hazard is a function of the following: the exposure time (chronic or acute); the irradiance value (a function of both the image size and the ...).
Seismic hazard is a function of the acceleration coefficient.
Hazard is a function of the way a chemical is produced, used or discarded.
Relative inhalation hazard at room temperature: the relative inhalation hazard is a function of a solvent's toxicity and the amount which volatilizes and ...
The relative fire hazard is a function of the temperature at which the material will give off flammable vapors which, when they come in contact with a ...
Hazard is a function of the toxicity of a pesticide and the potential for exposure to it. We do not have control of the toxicity of a pesticide since ...
In the products liability context, the obviousness of a hazard is a function of “the typical user’s perception and knowledge and whether the relevant ...
It fails to take into account the fact that the assessment of the hazard is a function of the inspection carried out by the environmental health officer ...
Toxicity: the inherent capacity of a substance to produce an injury or death; hazard: a function of toxicity and exposure; the potential threat ...
A hazard is a function of both the magnitude of a physical event such as an earthquake and the state of preparedness of the society that is affected by it.
Risk, for any specific hazard, is a function of the severity of possible harm and the probability of the occurrence of that harm.
The crime hazard is a function of some baseline hazard common to all individuals and explanatory variables.
Erosion hazard is a function of soil texture, crop residue and slope.
The degree of fire hazard is a function of a number of factors, such as fuel load, building structure, ignition, and propagation of flames ...
While it is clear that the degree of hazard is a function of both velocity (v) and depth (d) (e.g., Abt et al., 1989), and that a flood with depth but no ...
Seismic hazard is a function of failure probability vs. PFA (peak floor acceleration). This function varies ...
The degree of hazard is a function of the frequency of the presence of the ignitable gas or vapor. That is, the more often the ignitable gas or vapor ...
The degree of hazard is a function of the differing toxicity of the various forms of beryllium and of the type and magnitude of beryllium exposure.
Thus, hazard is a function of survival time. The cumulative hazard at a given time is the hazard integrated over the whole time interval until the given ...
Exposure is a function of the nature of emission sources, paths and receivers, and hazard is a function of chemical attributes and their myriad health ...
Often "hazard" is a function of the very properties which can be harnessed to create value for society (e.g. chemical reactivity).
Hazard is a function of toxicity and exposure. If the toxicity is low and the exposure is low, then the hazard will be low.
Hazard is a function of a set of independent variables.
The relative inhalation hazard is a function of a solvent's toxicity and the amount that volatilizes and thus is available for inhalation at room ...
Electrical shock hazard is a function of the current through the human body. Current can be directly limited by design, by additional current limiting ...
Seismic hazard is a function of the size, or magnitude, of an earthquake, distance from the earthquake, local soils, and other factors, and is independent of ...
Our response to a real or imagined hazard is a function of our perception of that hazard. In many situations, hazards are ignored or disregarded.
The same thing; related, in that vulnerability is a function of hazard; related, in that hazard is a function of vulnerability; not related.
Hazard is a function of the intrinsic properties of the chemical that relate to persistence, bioaccumulation potential and toxicity.
Expected damage or loss from a given hazard is a function of hazard characteristics (probability, intensity, extent) and vulnerability ...
The estimation of risk for a given hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its ...
Hazard is a function of growth pressures and the interest rate, as well as other variables (e.g. development fees) that vary over time but not over parcels ...
Hazard is a function of two primary variables, toxicity and exposure, and is the probability that injury will result ...
In this system, hazard is a function of the frequency of weather conditions favorable to WPBR infection. Hazard is defined as potential stand damage.
Prevention of destructibility of a hazard is a function of effective preparedness and mitigation measures following an objective analysis of the ...
Hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its consequences.
Hazard is a function of the organism and is related to its ability to cause negative effects on humans, animals, or the ecosystem.
So the answer is that all the airplanes are creating vortexes, but the real issue is that the hazard is a function of a lot of characteristics, but mostly the ...
Hazard is a function of the elapsed time since the last seismic event and the physical dimensions of the related active fault segment.
The degree of hazard is a function of the chemical/physical properties of the substance(s) and the quantities involved.
Thus, the hazard is a function of event, use, and actions taken to reduce losses.
The degree of hazard is a function of both the probability that backflow may occur and the toxicity or pathogenicity of the contaminant involved.
The estimated hazard is a function of unemployment duration, but in the model it is a function of human capital. In order to map duration into human capital ...
The risk of each hazard is a function of the contaminant source, containment, transport pathway and the receptor.
The hazard is a function of current magnitude and time, or the integral of current. This description of the background of the invention has emphasized ground ...
The level of risk posed by a hazard is a function of the probability of exposure to that hazard and the extent of the harm that would be caused by that ...
Hazard is a function of exposure and effect. Hazard assessment can be used to either refute or quantify potentially harmful effects ...
The hazard is a function of rainfall erosivity, slope (gradient and length), soil erodibility and the amount of vegetative protection on the surface.
The estimation of risk for any given hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its ...
From durations to human capital: the estimated hazard is a function of unemployment duration, but in the model it is a function of human capital.
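A minimal Python sketch of the formulation that recurs throughout these snippets, namely risk as a function of hazard severity and exposure probability; the 1-5 scales and the simple product rule are invented here for illustration and are not the author's methodology or any standard.

# Minimal sketch of "risk is a function of hazard and exposure".
# The 1-5 scales and the product rule are illustrative assumptions only.
def risk_score(hazard_severity: int, exposure_probability: int) -> int:
    """Both inputs on an illustrative 1-5 scale; returns a 1-25 risk score."""
    return hazard_severity * exposure_probability

# A highly hazardous material with almost no exposure scores low, matching the
# observation above that hazard without exposure poses little risk.
print(risk_score(hazard_severity=5, exposure_probability=1))   # 5
print(risk_score(hazard_severity=3, exposure_probability=4))   # 12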
By Andres Agostini
Ich Bin Singularitarian!
www.geocities.com/agosbio/a.html
Management's Best Practices (Andres Agostini) Ich bin Singularitarian
MANAGEMENT REFLECTIONS (BUSINESS-plus)
Moral responsibility in corporate medical management is a function of the exercise of authority over different aspects of the medical decision making ...
Management is a function of hazards mitigation and vulnerability reduction. This is a very simple understanding for people in disaster studies.
Effective management is a function of developing proper individual or team performance measures and then monitoring those ...
Natural resource management is a function of managing County parks, reserves, and recreation areas. The Department of Parks and Recreation has developed and ...
Emergency management is a function of the department as well. This is a co-managed function of both the City of
By Andres Agostini
Ich Bin Singularitarian!
On the Future of Quality !!!
"Excellence is important. To everyone, excellence means something a bit different. Do we need a metric for excellence? Why, then, do I believe that the qualitative side of it is more important than its numericalization? By the way, the increasing tsunamis of vanguard sciences and corresponding technologies to be applied bring about an upping of the technical parlance.
These times, as Peter Schwartz would firmly recommend, require one to “pay” the highest premium for leading knowledge.
“Chindia” (
Yes, simple is beautiful, but horrendous when this COSMOS is overwhelmed with paradoxes, contradictions, and predicaments. And you must act to capture success and, overall, to make it sustainable.
Quality is crucial. Benchmarks are important but refer to something else, though similar. But Quality standards, as per my view, would require a discipline to be named “Systems Quality Assurance.” No one wishes defects/waste.
But with my hat and vest of strategy and risk management on, the ultimate best practices of quality –in many settings– will not suffice. One has got to add (a) Systems Security, (b) Systems Safety, (c) Systems Reliability, (d) Systems Strategic Planning/Management and a long “so forth.”
When this age of changed CHANGE is so complex like never ever –and getting increasingly more so- just being truly excellent require, without a fail, many more approaches and stamina."
Posted by Andres Agostini at February 22, 2008 9:18 PM
Posted by Andres Agostini on This I Believe! (AATIB) at 6:25 PM 0 comments
Labels: www.AndresAgostini.blogspot.com, www.andybelieves.blogspot.com
"Clearly, hard work is extremely important. There is a grave lack of practices of this work philosophy in the battlefield. Practicing, practicing and practicing is immeasurably relevant.
Experience accumulated throughout the years is also crucial, particularly when one is always seeking mind-expansion activities.
With it comes practical knowledge. When consulting and training, yes, you are offering ideas to PRESENT clients with CHOICES/OPTIONS leading to SOLUTIONS.
Communicating with the client is extremely difficult. Nowadays, some technical solutions that the consultant or advisor must implement have a depth that will shock the client unless there is careful and prudent preparation/orientation of the targeted audience.
Getting to know the company culture is another sine qua non. The personal cosmology of each executive or staff member involved on behalf of the client is even more important. Likewise, the professional-service expert must do the same with the CEO and the Chairman.
In fact, in your notes, a serious consultant must keep an unofficial psychological profile of the client representatives. One has to communicate unambiguously, but it sometimes helps to adapt your lexicon to that of the designated client.
From interview one, paying strong attention and listening closely to the customer, the advisor must give choices while always being EDUCATIONAL, INFORMATIVE, and, somehow, FORMATIVE/INDUCTIVE. That is the problem.
These times are not those. Even when the third party possesses the knowledge, skills, know-how, and technology, he or she now must work much harder at making sure the customer’s mind and heart lock in with yours.
Before starting the CONSULTING EFFORT, I personally like to have a couple of informal meetings just to listen, and then listen some more.
Then I forewarn them that I will be asking a great number of questions. Afterward, I take extensive notes and start crafting the strategy to build rapport with this customer.
Taking all the information given informally in advance by the client, I make an oral presentation to make sure I understood what the problem is. I also take this opportunity to capture further information and to relax everyone, while trying to win them over legitimately and transparently.
Then, if I see, for instance, that they do not know how to name or express their problem lucidly and accurately, I ask questions. But I also offer real-life examples of these probable problems from other clients.
This opportunity is absolutely vital for gauging the customer’s level of competency and knowledge, or lack of knowledge, about the issue. Having gone over all of that, I start, informally, speaking of options to get the customer involved in picking out the CHOICE (the solution), so as to watch for the client’s initial reactions.
In my case, and many times, I must not only transfer the approaches/skills/technologies but also institute and sustain them to the 150% satisfaction of my clients.
Those of us involved with Systems Risk Management(*) (“Transformative Risk Management”) and Corporate Strategy are obliged to scan for problems, defects, process waste, failures, etc. WITH FORESIGHT.
Once that is done and still “on guard,” I can highlight the opportunity (upside risk) to the client.
Notwithstanding, once you already know your threats, vulnerabilities, hazards, and risks (and you have a master risk plan, likewise contemplated in your business plan), YOU MUST BE CREATIVE SO THAT “HARD WORK” MAKES A UNIQUE DIFFERENCE IN YOUR INDUSTRY.
While practicing, run a zillion low-cost experiments. Do a universe of trial and error. Commit to serendipity and/or pseudo-serendipity. In the meantime, and as former
(*) It does not refer at all to insurance, co-insurance, or reinsurance. It is more about the multidimensional, cross-functional management of business processes so that they are compliant with goals and objectives."
Posted by Andres Agostini at February 23, 2008 4:56 PM
Posted by Andres Agostini on This I Believe! (AATIB) at 1:58 PM 0 comments
Labels: www.AndyBelieves.blogspot.com/
“I like the dreams of the future better than the history of the past.” (
Friedrich Wilhelm Nietzsche, the German philosopher, reminds one, “It is our future that lays down the law of our work.” While Churchill tells us, “the empires of the future belong to the [prepared] mind.”
Last night I was reading the book “Wikinomics.” The authors say that in the next 50 years applied science will evolve far more than it has over the past 400 years. To me, and because of my other research, they are being quite conservative. Vernor Vinge, the professor of mathematics, reminds us of the “Singularity,” primarily technological and secondarily social (with humans that are BIO and non-BIO, plus derivatives of the two, i.e., in vivo + in silico + in quantum + in non spiritus). Prof. Vinge was invited by NASA on that occasion. If one would like to check it out, Google it.
Clearly, Quality Assurance progress has been made by Deming, Juran, Six Sigma, Kaizen (
The compilation of approaches is fun, though it must be extremely cohesive, congruent, and efficacious.
And if the economy grows more complex, I will grab many more methodologies. I have one of my own that I call “Transformative Risk Management,” based heavily on the breakthroughs of the Military-Industrial (-Technological) Complex, chiefly by the people involved with nascent NASA (Mercury, Saturn, Apollo) under Dr. Wernher von Braun, then chief engineer. Fortunately, my mentor, a “doctor in science,” was von Braun’s risk manager for thirteen years. He is now my supervisor.
The Military-Industrial (-Technological) Complex faced a great number of challenges back in the 1950s. As a result, many breakthroughs were brought about. Today, not everyone seems to know and/or institute these findings. Some do, such as ExxonMobil. The book “Powerful Times” attributes to
In addition, the grandfather of in-depth risk analyses is one that goes by many names besides Hazard Mode and Effect Analysis (HMEA). It has also been called Reliability Analysis for Preliminary Design (RAPD), Failure Mode and Effect Analysis (FMEA), Failure Mode, Effects, and Criticality Analysis (FMECA), and Fault Hazard Analysis (FHA). All of these, just to give an example, have to be included in your methodical toolkit alongside Deming’s, Juran’s, Six Sigma, and Kaizen.
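As a minimal sketch of how such an analysis is commonly scored (the scales, names, and example failure modes below are my own illustrative assumptions, not a worksheet from any of the methods just listed), an FMEA-style exercise ranks each failure mode by a Risk Priority Number, RPN = severity x occurrence x detection, and mitigates the highest-ranked ones first.

from dataclasses import dataclass

@dataclass
class FailureMode:
    # One row of a hypothetical FMEA-style worksheet; all scores are on a 1-10 scale.
    name: str
    severity: int    # 1 = negligible harm, 10 = catastrophic
    occurrence: int  # 1 = remote, 10 = almost certain
    detection: int   # 1 = almost certainly caught, 10 = practically undetectable

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the usual severity x occurrence x detection product.
        return self.severity * self.occurrence * self.detection

# Made-up failure modes for illustration only.
modes = [
    FailureMode("Seal leak under thermal cycling", severity=8, occurrence=4, detection=6),
    FailureMode("Sensor drift past calibration limits", severity=5, occurrence=6, detection=3),
    FailureMode("Operator skips an inspection step", severity=7, occurrence=3, detection=7),
]

# Highest RPN first: these become the priorities of the master risk plan.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{mode.name}: RPN = {mode.rpn}")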
These fellows manage with what they call “the omniscience perspective,” that is, the totality of knowledge. Believe me, they do mean it.
Yes, hard work, but knowing what you are doing, always thinking about the unthinkable, being a foresighter, and assimilating documented “lessons learned” from previous flaws. In the meantime, Sir Francis Bacon wrote, “He that will not apply new remedies must expect new evils; for time is the greatest innovator.”
(*) A "killer" to "common sense" activist. A blessing to rampantly unconventional- wisdom practitioner.
For the “crying” one, everything has changed. What has changed: (i) CHANGE, (ii) Time, (iii) Politics/Geopolitics, (iv) Science and technology (applied), (v) Economy, (vi) Environment (in its amplest meaning), (vii) Zeitgeist (spirit of the times), (viii) Weltanschauung (conception of the world), (ix) the prolific interaction between Zeitgeist and Weltanschauung, etc. So there is no need to worry, since NOW, and every day forever (kind of...), there will be a different world, clearly so if one looks into the sub-atomic granularity of (zillions of) details. Unless you are a historian, there is no need to speak of PAST, PRESENT, and FUTURE; JUST TALK ABOUT THE ENDLESSLY PERENNIAL PROGRESSION. Let’s learn a difficult lesson easily, NOW.
“Study the science of art. Study the art of science. Picture mentally… Draw experientially. Succeed through endless experimentation… It is advisable to recall that common sense is much more than an immense society of hard-earned practical ideas, made of multitudes of life-learned rules and tendencies, balances and checks. Common sense is not just one (1) thing, nor is it, in any way, simple.” (Andres Agostini)
Dwight D. Eisenhower, speaking of leadership, said: “The supreme quality for leadership is unquestionably integrity. Without it, no real success is possible, no matter whether it is on a section gang, a football field, in an army, or in an office.”
“…to a level of process excellence that will produce (as per GE’s product standards) fewer than four defects per million operations…” — Jack Welch (1998).
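To make the arithmetic behind that figure concrete, here is a minimal sketch with made-up counts (my own illustrative numbers, not GE data): defects per million opportunities is simply DPMO = defects / (units x opportunities per unit) x 1,000,000, and the classic Six Sigma goal of roughly 3.4 DPMO is what “fewer than four defects per million operations” refers to.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    # Defects per million opportunities, the metric behind the Six Sigma target.
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical process run: 250,000 units, 4 defect opportunities each, 3 defects found.
print(dpmo(defects=3, units=250_000, opportunities_per_unit=4))  # 3.0 DPMO, under the ~3.4 goal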
In addition to WORKING HARD and taking your “hard work” as your beloved HOBBY and never as a burden, one may wish to institute, as well, the following:
1.- Servitize.
2.- Productize.
3.- Webify.
4.- Outsource (strategically “cross” sourcing).
5.- Relate your core business to “molutech” (molecular technology).
Seek out these primary goals (in case a reader is interested):
A.- To build trust.
B.- To empower employees.
C.- To eliminate unnecessary work.
D.- To create a new paradigm for your business enterprise, a [beyond] “boundaryless” organization.
E.- Surf dogmas; evade sectarian doctrines.
Posted by Andres Agostini at February 27, 2008 7:54 PM
Ad agencies cannot make up for the shortcomings of the business enterprise. Those shortcomings are the consequence of a core business sub-optimally managed. Get the business to its optimum first. Then communicate it clearly, being sensitive to the community at large.
A funny piece is one thing. To make fun of others is another (terrible). To be creative in the message is highly desirable. If the incumbent’s corporation has unique attributes and does great business, just say it comprehensibly without manipulating or over-promising.
Someday soon the subject matter of VALUES is going to be more than indispensable to keeping global society alive. The rampant violation of those values should be a life-and-death matter of study for ad agencies, without fail.
Global climate change, the flu pandemic (to come), geological events (earthquakes, volcanoes, tsunamis), large meteorites, and nuclear wars are all among the existential risks. To make matters worse, value violations by ad agencies, mass media, and the rest of the economy would easily qualify as an existential risk.
Thank you all for your great contributions and insightfulness. Take, for example, a Quality Assurance Program to be instituted in a company these days, in the year 2008. One will have to go through tremendous amounts of reading, writing, drawing, spreadsheeting, etc. Since the global village is now the Knowledge Society, to abate exponential complexity you not only have to embrace that complexity fully, you have to be thorough at all times to meet the challenge. One must also pay the price of an advanced global economy that is in increasingly perpetual innovation. Da Vinci, in one list of the 10 greatest minds, was #1; Einstein was #10. Consequently, it is highly recommendable, if one wishes, to pay attention to “Everything should be made as simple [from the scientific stance] as possible, but not simpler.” Mr. Peters, on the other hand, has always stressed the significance of continuously disseminating new ideas, and he is really making an unprecedented effort in that direction. Another premium to pay, it seems, is to be extremely “thorough” (Trump).
Posted by Andres Agostini at February 28, 2008 3:11 PM
We need, globally, to get into the “strongest” peaceful mindset as soon as possible, not reaching a state of peace by waging wars. Sometimes experts and statesmen may require “surgical interventions,” especially under the monitoring of the U.N. Diplomacy is called to be reinvented and taken to the highest possible state of refinement: more and more diplomacy, and more and more refinement. Then a universal and aggressively enhanced diplomacy can be instituted.
Posted by Andres Agostini at February 29, 2008 4:02 PM
I appreciate the current contributions. I would like to think that the nearly impossible lies in your way (while you are emphatically self-driven for accomplishments), to be met with determined aggressiveness toward the ends (objectives, goals) in view. Churchill offers a great many examples of how an extraordinary leader operates.
Many lessons are to be drawn from him, without a doubt. Churchill reminds us, as many others do, that (scientific) knowledge is power. Napoleon, incidentally, says that a high-school (lyceum) graduate must study science and English (the lingua franca).
So the “soft knowledge” (values) plus the “hard knowledge” (science, technology) must converge in the leader (the true statesman). Staying updated in values, science, and technology in the 21st century, so as to be en route to being 99% success compliant, also requires an open mind (extremely self-critical) that is well prepared (Pasteur).
Posted by Andres Agostini at February 29, 2008 4:19 PM
My experience tells me that every client must be won over to become your true ally. When you are selling high-tech/novel technologies/products/services, one must do a lot of talking to walk the customer through a menu of probable solutions. The more the complications, the more the careful talk in unambiguous language.
If that phase succeeds, it is necessary to make oral and written presentations to the targeted client. Giving him, while at it, a number of unimpeachable real-life examples (industry by industry) will lead the customer to envision you more as an ally than as just a provider.
These continuous presentations are, of course, training/indoctrination for the customer, so that he better understands his problem and the breadth and scope of the likely solutions. If progress is made in this phase, one can start working out, very informally and in a relaxed manner, the clauses of the contract, particularly those that are daring. One by one.
When each one is finally approved by both sides, assemble the corresponding contract, get it approved, and implement it. Then keep close (in-person) contact with your customer.
Posted by Andres Agostini at February 29, 2008 4:32 PM
I like to meet in person and work together with my peers. Still, I can also work through the Web on my own, with the added benefits of some privacy and other conveniences. A mix of both, I think, is optimal.
How can one slow down global economic trends? The more technological the passing of time makes us, the more connected and wiki we will all be. Most of the interactions I see and experience are in the virtual world, with extreme consequences in the real world.
I think it is nice and productive to exchange ideas over a cappuccino. Personal contact is nice, though it gets better when it is less frequent. So, when it happens, meeting the person becomes a splendid occasion.
As things get more automated, so will we. Neither I nor any of you invented the world. Automation will come to do more of the work than machines alone. Sometimes it is of huge help to get an emotional issue ventilated through calm, discerning e-mails.
While continuing to embrace connectedness (which I like very much), I would say one must make in-person meetings a must-do. Let's recall that we are en route to Vernor Vinge's "Singularity."
Posted by Andres Agostini at February 29, 2008 4:46 PM
The prescription for making a true talent, by present standards, is diverse. Among the ten most important geniuses, there is Churchill again. He is the number-one (political) statesman from da Vinci’s time to the current moment. In one book (The Last Lion), Churchill is said to have remarked that a New Yorker, back then, transferred to him some methodology for capturing genius.
A great deal of schooling is crucial. A great deal of self-schooling is even more vital. Being experienced in different tenures and with different industries and with different clients helps beyond belief.
Studying and researching cross-references (across the perspective of omniscience) helps even more. Seeking mentors and tutors helps. Getting trained/indoctrinated in various fields does so too. Hiring consultants for your personal, individual induction/orientation adds much.
One has got to have an open mind with a gusto for multidimensionality and cross-functionality, harnessing and remembering useful knowledge everywhere, regardless of the context. I have worked on these and published some “success metaphors” on the Web, both text and video. Want them? Google them!
Learning different (even opposing) methodologies renders the combined advantages of them all into your own, unique multi-approach.
Most of these ideas can be marshaled concurrently.
Posted by Andres Agostini at February 29, 2008 5:11 PM
“It is with an iron will that they embark on the most daring of all endeavors… to meet the shadowy future without fear and conquer the unknown."
© 2007 Andres Agostini. All Rights Reserved.