
The first airplane

Thursday, 6 June 2013

The first airplane that was flown was a glider. A glider is a non-motorized flying machine, and early gliders were very hard to control. They were launched from high places like cliffs and floated on the wind down to the ground.

A man named Sir George Cayley made the first glider that actually flew. His first glider carried no pilot or passengers; it was too small to fit anyone. He later built a larger one that flew his coachman across a small valley. This glider was not launched from a cliff.

During the 1890s, while Orville and Wilbur Wright were working in their bicycle shop, they became interested in flying. They had learned from bicycles that machines built closer to the ground were faster, and they read all the books they could find about airplanes to learn more. They then began building gliders near Kitty Hawk, North Carolina, steadily improving the designs. In 1899 they made a large, two-winged kite. After experimenting for a while with unmanned gliders, they built a glider that the pilot could control in the air. Through these glider experiments they worked out how to steer a plane in flight, developing a rudder (at the tail of the plane) and movable control surfaces on the wings. With the rudder and the wing controls, the pilot could control both the direction and the height of the airplane.

In 1903 the Wright Brothers built their first powered airplane, which they named the Flyer. It was a biplane (a two-winged plane) with a 12-horsepower engine that they had built themselves. The wings were 40 feet wide, made of wood, and covered with cotton cloth. The pilot lay on his stomach on the lower wing to steer. In December of 1903, the Wright Brothers became the first people to successfully fly a powered plane with a person aboard. The first flight covered one hundred twenty feet and lasted only about twelve seconds. They made several successful flights that day; Wilbur made the longest, covering 852 feet and staying up for about 59 seconds. In 1908 the Wright Brothers finally built a plane that could stay in the air for more than an hour and a half.

Improvements to Airplanes

In 1843 William S. Henson, an inventor, patented plans for an airplane after trying to build a model airplane. When his plans failed he gave up on airplanes. His friend, John Stringfellow, built a model based on Henson's design and succeeded: the plane launched, but could only stay in the air for a short time.

In 1890 Clément Ader took off in the first steam-powered plane (a plane with an engine, unlike the glider), which he had built himself. Unluckily, he could not truly fly it because he could not control it. Around the same time another inventor, Hiram Maxim, built a steam-powered flying machine. He tested his airplanes, but never really got them off the ground because they were too heavy and he could not control the flight.

During 1894 Samuel Langley flew a steam-powered model plane that traveled 0.8 kilometers in about a minute and a half. When Langley later built a full-size airplane, he had a pilot attempt to fly it twice, once on October 7 and once on December 8, but sadly the plane crashed into the Potomac River both times.

U.S. Army Lieutenant Thomas E. Selfridge was the first person killed in a plane crash. The military wanted to see how well the Wright Brothers' airplane flew. On September 17, 1908, Selfridge went up in a plane with Orville Wright. When they were 75 feet in the air, a propeller broke. The plane crashed, killing Selfridge and injuring Orville, but the Wright Brothers still did not give up. In 1909 they won a contract from the military to build the first military plane.

In 1911, Calbraith Rodgers made the first flight across the United States, from Sheepshead Bay, New York, to Long Beach, California. During the 84 days of the journey, Rodgers crashed at least 70 times and had to replace almost every part of the plane before he reached Long Beach. Altogether, the trip involved 3 days, 10 hours, and 24 minutes of time actually spent in the air.

Airplane travel has improved a great deal since the first efforts of the Wright Brothers. Airplanes now travel thousands of miles at altitudes of more than 7 miles, carrying over three hundred passengers. Those passengers rest in comfortable seats instead of on their stomachs like Orville did. Jet engines have replaced propellers and speeds are greater than 600 miles per hour. Not even the Wright brothers could have imagined what air travel would be like today.


A History of Viruses

Thursday, 30 May 2013

Viruses have existed for as long as there has been life on Earth.
Early references to viruses

Early references to viral infections include Homer’s mention of “rabid dogs”. Rabies, a disease of dogs caused by a virus, was also known in Mesopotamia.

Polio, which can lead to paralysis of the lower limbs, is also caused by a virus, and its effects may be seen in drawings from ancient Egypt.

In addition, smallpox, caused by a virus that has now been eradicated from the world, played a significant role in the history of South and Central America.
Virology – the study of viruses

The study of viruses is called virology. It began with the experiments of Jenner in 1798. Jenner did not know the cause, but found that individuals exposed to cowpox did not suffer from smallpox.

He began the first known form of vaccination, using cowpox infection to prevent smallpox. At that point he had identified neither the causative organism nor the cause of the immunity for either cowpox or smallpox.
Koch and Henle

Koch and Henle formulated their postulates on the microbiology of disease. These state that:
the organism must regularly be found in the lesions of the disease;
it must be isolated from the diseased host and grown in pure culture;
inoculation of the pure culture into a healthy host should reproduce the disease, and the organism should be recoverable from the secondarily infected host as well.

Viruses do not conform to all of these postulates.
Louis Pasteur

Between 1881 and 1885 Louis Pasteur first used animals as models for growing and studying viruses. He found that the rabies virus could be cultured in rabbit brains and developed the rabies vaccine. However, Pasteur did not try to identify the infectious agent itself.
The discovery of viruses

1886-1903 was the period in which viruses were actually discovered. In 1892 Ivanowski showed that the agent of tobacco mosaic disease passed through filters that retained bacteria, and in 1898 Beijerinck demonstrated the filterable character of the virus and found that it is an obligate parasite. This means that a virus is unable to live and multiply on its own.
Charles Chamberland and filterable agents

In 1884, the French microbiologist Charles Chamberland invented a filter with pores smaller than bacteria. Chamberland filter-candles, made of unglazed porcelain or of diatomaceous earth (kieselguhr), had originally been developed for water purification. These filters retained bacteria and had pore sizes of 0.1-0.5 microns. Agents that passed through them were called “filterable” organisms. Loeffler and Frosch (1898) reported that the infectious agent of foot-and-mouth disease was such a filterable agent.

In 1900 yellow fever became the first human disease shown to be caused by a filterable agent, through the work of Walter Reed. He found the yellow fever virus in the blood of patients during the fever phase, and showed that the virus spread via mosquitoes. In 1853 an epidemic in New Orleans had a mortality rate as high as 28%; infection was eventually controlled by destroying mosquito populations.
Trapping viruses

In the 1930s Elford developed graded collodion membranes that could trap viruses, showing that viruses are far smaller than bacteria, on the order of tens to hundreds of nanometers. Earlier, in 1908, Ellerman and Bang had demonstrated that certain tumors (leukemia of chickens) were caused by viruses, and in 1911 Peyton Rous discovered that a non-cellular agent could transmit solid tumors; the agent was named the Rous sarcoma virus (RSV).

The most important development was the bacteriophage era. In 1915 Twort, while working with vaccinia virus, found agents that destroyed the bacteria growing in his cultures, but he abandoned this work after World War I. In 1917 Félix d'Herelle independently observed similar agents and coined the term bacteriophage (“bacteria eater”).
Images of viruses

In 1931 the German engineers Ernst Ruska and Max Knoll built the first electron microscope, which enabled the first images of viruses. In 1935, American biochemist and virologist Wendell Stanley examined the tobacco mosaic virus and found it to be made mostly of protein. A short time later, this virus was separated into its protein and RNA parts. Tobacco mosaic virus was the first virus to be crystallised, and its structure could therefore be elucidated in detail.
Molecular biology

Between 1938 and 1970 virology developed by leaps and bounds into a molecular science. The 1940s and 1950s were the era of the bacteriophage and the animal virus.

Delbrück is considered the father of modern molecular biology; he brought the concepts of virology into the science. In 1952 Hershey and Chase showed that the nucleic acid portion of the phage was responsible for infectivity and carried the genetic material.

In 1953 Watson and Crick determined the exact structure of DNA. In 1949 Lwoff found that a virus could behave like a bacterial gene on the chromosome (lysogeny), work that later fed into the operon model of gene induction and repression. In 1957 Lwoff defined viruses as potentially pathogenic entities with an infectious phase, having only one type of nucleic acid, multiplying by means of their genetic material, and unable to undergo binary fission.

In 1931, American pathologist Ernest William Goodpasture grew influenza and several other viruses in fertilised chickens' eggs. In 1949, John F. Enders, Thomas Weller, and Frederick Robbins grew polio virus in cultured human embryo cells, the first virus to be grown without using solid animal tissue or eggs. This enabled Jonas Salk to make an effective polio vaccine.

The era of polio research came next and was very important: in 1953 the Salk vaccine was introduced, and by 1955 poliovirus had been crystallized. Later, Sabin introduced the attenuated (live) polio vaccine.

In the 1980s the cloning of viral genes developed, the sequencing of viral genomes succeeded, and the production of hybridomas became a reality. The AIDS virus, HIV, was identified in the 1980s as well. Uses of viruses in gene therapy developed over the following two decades.

Solar Energy

Sunday, 19 May 2013

The Basics:

Solar energy technologies convert the sun’s light into usable electricity or heat. Solar energy systems can be divided into two major categories: photovoltaic and thermal. Photovoltaic cells produce electricity directly, while solar thermal systems produce heat for buildings, industrial processes or domestic hot water. Thermal systems can also generate electricity by operating heat engines or by producing steam to spin electric turbines. Solar energy systems have no fuel costs, so most of their cost comes from the original investment in the equipment. The total installed costs of solar applications vary depending on the type of financing used. Solar photovoltaics generally range from $6-$10 per watt installed, or $12,000-$30,000 for a typical 2-3 kilowatt residential-scale system. A solar hot water system sized for a typical home is much cheaper and costs between $3,500 and $8,000 depending on the size and type of the system (above prices exclude any incentives or rebates). 
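The cost figures above are simple multiplications and can be sketched in a few lines. This is a rough illustration using the per-watt ranges quoted here, not authoritative pricing:

```python
# Rough installed-cost estimate for a residential PV system,
# using the $6-$10 per installed watt range quoted above.
def pv_installed_cost(system_kw, dollars_per_watt):
    """Up-front cost in dollars for a PV system of the given size."""
    return system_kw * 1000 * dollars_per_watt

low = pv_installed_cost(2, 6)    # 2 kW system at $6/W
high = pv_installed_cost(3, 10)  # 3 kW system at $10/W
print(f"Typical residential system: ${low:,} to ${high:,}")
```

This reproduces the $12,000-$30,000 range given for a typical 2-3 kilowatt system.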
Resource Potential:

The Northwest receives more than enough sunlight to meet our entire energy needs for the foreseeable future. As the map above illustrates, the Northwest’s highest potential is in southeastern Oregon and southern Idaho; however, there are no “bad” solar sites—even the rainiest parts of the Northwest receive almost half as much solar energy as the deserts of California and Arizona, and they receive more than Germany, which has made itself a solar energy leader.
Photovoltaic Cells:

Photovoltaics (PVs) convert sunlight directly into electricity, using semiconductors made from silicon or other materials. Photovoltaic modules mounted on homes in the Northwest can produce electricity at a levelized cost of 20-60 cents per kilowatt-hour (kWh) before incentives. Incentives can bring the levelized cost down considerably to 10-20 cents per kWh.
PVs generate power on a much smaller scale than traditional utility power plants, so they can often provide high-value electricity exactly where and when it is needed. PVs are often the best choice for supplying power for remote, “off-grid” sites or in situations where the transmission or distribution system would otherwise need to be upgraded in order to meet peak demands. Distribution line extensions of more than half a mile are generally more expensive than investing in a PV system for a typical home.
Other cost-effective PV applications include building-integrated power generation, meeting high summer demand for electricity (e.g., air conditioning), pumping water, lighting signs and powering equipment used for communications, safety or signaling.
Prices for photovoltaics are falling as markets expand. Solar PV demand has grown consistently by 20-25% per year over the past 20 years while solar cell prices fell from $27 per watt of capacity in 1982 to less than $4 per watt today.
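As a back-of-the-envelope check on the price trend just quoted, the fall from $27 per watt in 1982 to roughly $4 per watt implies a steady average decline of about 8% per year (the end year is an assumption here, taken as roughly 2005):

```python
p_1982, p_today = 27.0, 4.0   # $/watt figures from the text
years = 2005 - 1982           # assumed span of the decline
annual_decline = 1 - (p_today / p_1982) ** (1 / years)
print(f"Implied average price decline: {annual_decline:.1%} per year")
```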
Direct Thermal:

Direct-use thermal systems are usually located on individual buildings, where they use solar energy directly as a source of heat. The most common systems use sunlight to heat water for houses or swimming pools, or use collector systems or passive solar architecture to heat living and working spaces. These systems can replace electric heating for as little as three cents per kilowatt-hour, and utility and state incentives reduce the costs even further in some cases.
Environmental Impacts:

Solar power is an extremely clean way to generate electricity. There are no air emissions associated with the operation of solar modules or direct application technologies. Residential-scale passive construction, photovoltaic, solar water heating, and other direct applications reduce power generation from traditional sources and the associated environmental impacts.
Net Metering:

Utilities in all four Northwestern states offer net metering programs, which make it easy for customers to install solar electric systems at their homes or businesses. In a net metering program, customers feed extra power generated by their solar equipment during the day into the utility’s electrical grid for distribution to other customers. Then, at night or other times when the customer needs more power than their system generates, the building draws power back from the utility grid.
Net metering allows customers to install solar equipment without the need for expensive storage systems, and without wasting extra power generated when sunlight is at its peak. Such programs also provide a simple, standardized way for customers to use solar systems while retaining access to utility-supplied power.
In most net metering programs, the utility installs a special ‘dual-reading’ meter at the customer's building, which keeps track of both the energy consumed by the building and the energy generated by the solar array. The customer is billed only for the net amount of electricity drawn from the utility, effectively receiving the utility’s full retail price for the electricity they generate themselves.
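The billing logic of a dual-reading meter reduces to a small calculation. A minimal sketch, with hypothetical numbers, and ignoring how real utilities carry excess credits between billing periods:

```python
def net_metering_bill(consumed_kwh, generated_kwh, retail_rate):
    """Bill only for net consumption, as a dual-reading meter allows.
    Excess generation is simply not billed here; real credit rules vary."""
    net_kwh = consumed_kwh - generated_kwh
    return max(net_kwh, 0) * retail_rate

# A month with 900 kWh consumed and 600 kWh generated, at $0.10/kWh:
print(net_metering_bill(900, 600, 0.10))  # 30.0
```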
[Figure: Annual U.S. Solar Installations by Technology. Source: Interstate Renewable Energy Council]
Net metering is available from utilities throughout Oregon and Washington, and law requires most Montana utilities to offer it as well. Additionally, Idaho Power and Rocky Mountain Power offer net metering in Idaho in accord with a Public Utilities Commission rule.
Incentive Programs in the Northwest:

Every state in the Northwest offers incentives for solar energy development. Oregon, Idaho and Montana all offer low-interest loans and substantial tax credits for solar systems bought by businesses, individuals or governments. Washington now offers a production incentive of $0.15/kilowatt-hour or more for electricity from solar energy, depending on where the technology was manufactured. Montana and Oregon also exempt solar systems from property tax assessment, while Idaho and Washington exempt solar system purchases from sales taxes. Many local utilities and regional organizations also provide incentives. For example, the Energy Trust of Oregon offers additional rebates and loans to customers of Oregon’s two largest utilities and many utilities offer additional rebates, loans, or production incentives for solar energy systems.


Robotics

Saturday, 18 May 2013



Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, Pneumatica and Automata, that testify to the existence of hundreds of different kinds of “wonder” machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.

The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story “Liar!” Science fiction authors have long been interested in humanity's capacity to produce self-motivated machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley’s Dr. Frankenstein and Arthur C. Clarke’s HAL 9000. Essentially, a robot is a re-programmable machine capable of movement in the completion of a task; this re-programmability distinguishes robots from other automated machines and machine tools, such as CNC equipment. Robots have found uses in a wide variety of industries due to their durability and precise, repeatable operation.

Historical Robotics 

Many sources attest to the popularity of automatons in ancient and medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and parts of religious ceremonies. Prefiguring modern robots in industry, the Greek god Hephaestus was supposed to have built automatons to work for him in his workshop. Unfortunately, none of the early automatons survive.

In the Middle Ages, in both Europe and the Middle East, automatons were popular as part of clocks and religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band and a waitress automaton that served drinks. In Europe, there is an automaton monk extant that kisses the cross in its hands. Many other automata were created that showed moving animals and humanoid figures that operated on simple cam systems, but in the 18th century, automata were understood well enough and technology advanced to the point where much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that they traveled Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.

Victorian Robots 

The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most successful projects being the difference engine and the analytical engine. Although never completed due to lack of funds, these two machines laid out the basics for mechanical calculations. Others such as Ada Lovelace recognized the future possibility of computers creating images or playing music.

Automata continued to provide entertainment during the 19th century, but coterminous with this period was the development of steam-powered machines and engines that helped to make manufacturing much more efficient and quick. Factories began to employ machines to either increase work loads or precision in the production of many products. 

The 20th Century to Today

In 1920, Karel Čapek published his play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot,” taken from a Slavic root meaning something akin to “monotonous or forced labor.” However, it was roughly forty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm that transported die castings in a General Motors plant in New Jersey; it started work in 1961. Unimation, the company Devol founded with robotics entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, robotics began to develop into another tool in the industrial manufacturing arsenal.

Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections for the 2011 population of industrial robots are around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to achieve. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

Supercomputers


Supercomputing is all about pushing out the leading edge of computer speed and performance. The sports metaphors that arise as research sites compete to create the fastest supercomputer sometimes obscure the goal of crunching numbers that had previously been uncrunchable -- and thereby providing information that had previously been inaccessible.

Supercomputers have been used for weather forecasting, fluid dynamics (such as modeling air flow around airplanes or automobiles) and simulations of nuclear explosions -- applications with vast numbers of variables and equations that have to be solved or integrated numerically through an almost incomprehensible number of steps, or probabilistically by Monte Carlo sampling.
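Monte Carlo sampling, mentioned above, replaces exact integration with statistical estimation from random samples. A toy illustration of the idea (estimating π, nothing like a real supercomputer workload):

```python
import random

def estimate_pi(samples, seed=0):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159; accuracy grows with samples
```

The error shrinks only with the square root of the sample count, which is why serious Monte Carlo simulations consume so much computing power.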

The first machine generally referred to as a supercomputer (though not officially designated as one), the IBM Naval Ordnance Research Calculator, was used at Columbia University from 1954 to 1963 to calculate missile trajectories. It predated microprocessors, had a cycle time of 1 microsecond, and could perform about 15,000 operations per second.

About half a century later, the latest entry to the world of supercomputers, IBM's Blue Gene/L at Lawrence Livermore National Laboratory, will have 131,072 microprocessors when fully assembled and was clocked at 135.3 trillion floating-point operations per second (TFLOPS) in March.
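Taken at face value, the two figures in this article quantify the half-century leap: about 15,000 operations per second for the NORC in 1954 versus 135.3 trillion floating-point operations per second for Blue Gene/L (a loose comparison, since the operation types differ):

```python
norc_ops_per_sec = 15_000      # IBM NORC, 1954 (figure from the text)
blue_gene_flops = 135.3e12     # Blue Gene/L, 135.3 TFLOPS (from the text)
speedup = blue_gene_flops / norc_ops_per_sec
print(f"Roughly {speedup:.1e}x faster over ~50 years")  # about 9e9: nine billion times
```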

The computer at Livermore will be used for nuclear weapons simulations. The Blue Gene family will also be used for biochemical applications, reflecting shifts in scientific focus, making intricate calculations to simulate protein folding specified by genetic codes.

The early history of supercomputers is closely associated with Seymour Cray, who designed the first officially designated supercomputers for Control Data in the late 1960s. His first design, the CDC 6600, had a pipelined scalar architecture and used the RISC instruction set that his team developed. In this architecture, a single CPU overlaps fetching, decoding and executing instructions to process one instruction each clock cycle.

Cray pushed the number-crunching speed available from the pipelined scalar architecture with the CDC 7600 before developing a four-processor architecture with the CDC 8600. Multiple processors, however, raised operating system and software issues.

When Cray left CDC in 1972 to start his own company, Cray Research, he abandoned the multiprocessor architecture in favour of vector processing, a split that divides supercomputing camps to this day.

Cray Research pursued vector processing, in which hardware was designed to unwrap "for" or "do" loops. Using a CDC 6600, the European Centre for Medium-Range Weather Forecasts (ECMWF) produced a 10-day forecast in 12 days. But using one of Cray Research's first products, the Cray 1-A, the ECMWF was able to produce a 10-day forecast in five hours.

National Pride

Throughout their early history, supercomputers remained the province of large government agencies and government-funded institutions. The production runs of supercomputers were small, and their export was carefully controlled, since they were used in critical nuclear weapons research. They were also a source of national pride, symbolic of technical leadership.

So when the US National Science Foundation (NSF) decided in 1996 to buy a Japanese-made NEC supercomputer for its Colorado weather-research center, the decision was seen as another nail in the coffin of US technological greatness. Antidumping legislation was brought to bear against the importation of Japanese supercomputers, which were and still are based on improvements on vector processing.

But within two years of the NSF's decision, the supercomputing landscape changed. The antidumping decision was revoked. And the ban on exporting supercomputers to nuclear-capable nations was also partially rescinded. What had happened?

For one thing, microprocessor speeds found on desktops had overtaken the computing power of yesteryear's supercomputers. Video games were using the kind of processing power that had previously been available only in government laboratories. The first Bush administration defined supercomputers as being able to perform more than 195 million theoretical operations per second (MTOPS). By 1997, ordinary microprocessors were capable of over 450 MTOPS.

Technologists began building distributed and massively parallel supercomputers and were able to tackle the operating system and software problems that had deterred Seymour Cray from multiprocessing 40 years before. Peripheral speeds had increased so that I/O was no longer a bottleneck. High-speed communications made distributed and parallel designs possible.

As a result, vector processing technology may be in eclipse. NEC produced the Earth Simulator in 2002, which uses 5,104 processors and vector technology. According to the Top500 list of supercomputers, the Earth Simulator achieves 35.86 TFLOPS. IBM's Blue Gene/L, the current leader, is expected to achieve about 200 TFLOPS. It consumes about one-fifteenth the power per computation and occupies about one-fiftieth the space of previous supercomputers.

As detailed on the Top500 site, the trend in supercomputers is toward clusters of scalar processors running Linux and leveraging the power of off-the-shelf microprocessors, open-source operating systems and 50 years of experience with the middleware needed to pull these elements together.

Flying Cars


Just a decade and a half after the Wright Brothers took off in their airplane over the plains of Kitty Hawk, N.C., in 1903, other pioneering men began chasing the dream of a flying car. There was even one attempt in the 18th century to develop a gliding horse cart, which, to no great surprise, failed. There are nearly 80 patents on file at the United States Patent and Trademark Office for various kinds of flying cars. Some of these have actually flown. Most have not. And all have come up short of reaching the goal of the mass-produced flying car. Here's a look back at a few of the flying cars that distinguished themselves from the pack:
  • Curtiss Autoplane - In 1917, Glenn Curtiss, who could be called the father of the flying car, unveiled the first attempt at such a vehicle. His aluminum Autoplane sported three wings that spanned 40 feet (12.2 meters). The car's motor drove a four-bladed propeller at the rear of the car. The Autoplane never truly flew, but it did manage a few short hops.
  • Arrowbile - Developed by Waldo Waterman in 1937, the Arrowbile was a hybrid Studebaker-aircraft. Like the Autoplane, it too had a propeller attached to the rear of the vehicle. The three-wheeled car was powered by a typical 100-horsepower Studebaker engine. The wings detached for storage. A lack of funding killed the project.
  • Airphibian - Robert Fulton, who was a distant relative of the steam engine inventor, developed the Airphibian in 1946. Instead of adapting a car for flying, Fulton adapted a plane for the road. The wings and tail section of the plane could be removed to accommodate road travel, and the propeller could be stored inside the plane's fuselage. It took only five minutes to convert the plane into a car. The Airphibian was the first flying car to be certified by the Civil Aeronautics Administration, the predecessor of the Federal Aviation Administration (FAA). It had a 150-horsepower, six-cylinder engine and could fly at 120 miles per hour and drive at 50 mph. Despite his success, Fulton couldn't find a reliable financial backer for the Airphibian.
  • ConvAirCar - In the 1940s, Consolidated-Vultee developed a two-door sedan equipped with a detachable airplane unit. The ConvAirCar debuted in 1947, and offered one hour of flight and a gas mileage of 45 miles (72 kilometers) per gallon. Plans to market the car ended when it crashed on its third flight.
  • Avrocar - The first flying car designed for military use was the Avrocar, developed by Avro Canada for the U.S. military. The flying-saucer-like vehicle was supposed to be a lightweight air carrier that would move troops to the battlefield.
  • Aerocar - Inspired by the Airphibian and Robert Fulton, whom he had met years before, Moulton "Molt" Taylor created perhaps the most well-known and most successful flying car to date. The Aerocar was designed to drive, fly and then drive again without interruption. Taylor covered his car with a fiberglass shell. A 10-foot-long (3-meter) drive shaft connected the engine to a pusher propeller. It cruised at 120 mph (193 kph) in the air and was the second and last roadable aircraft to receive FAA approval. In 1970, Ford Motor Co. even considered marketing the vehicle, but the decade's oil crisis dashed those plans.
These pioneers never managed to develop a viable flying car, and some even died testing their inventions. However, they proved that a car could be built to fly, and inspired a new group of roadable aircraft enthusiasts. With advances in lightweight material, computer modeling and computer-controlled aircraft, the dream is very close to becoming reality. In the next section, we will look at the flying cars being developed today that eventually could be in our garages.

Aeronautical Engineering

Aeronautical engineering:

The roots of aeronautical engineering can be traced to the early days of mechanical engineering, to inventors’ concepts, and to the initial studies of aerodynamics, a branch of theoretical physics. The earliest sketches of flight vehicles were drawn by Leonardo da Vinci, who suggested two ideas for sustentation. The first was an ornithopter, a flying machine using flapping wings to imitate the flight of birds. The second was an aerial screw, the predecessor of the helicopter.

Manned flight was first achieved in 1783, in a hot-air balloon designed by the French brothers Joseph-Michel and Jacques-Étienne Montgolfier. Aerodynamics became a factor in balloon flight when a propulsion system was considered for forward movement. Benjamin Franklin was one of the first to propose such an idea, which led to the development of the dirigible. The power-driven balloon was invented by Henri Giffard, a Frenchman, in 1852. The development of lighter-than-air vehicles occurred independently of the development of aircraft.

The breakthrough in aircraft development came in 1799, when Sir George Cayley, an English baron, drew an airplane incorporating a fixed wing for lift, an empennage (consisting of horizontal and vertical tail surfaces for stability and control), and a separate propulsion system. Because engine development was virtually nonexistent, Cayley turned to gliders, building the first successful one in 1849. Gliding flights established a database for aerodynamics and aircraft design. Otto Lilienthal, a German scientist, recorded more than 2,000 glides in a five-year period beginning in 1891. Lilienthal’s work was followed by that of the American aeronaut Octave Chanute, a friend of the American brothers Orville and Wilbur Wright, the fathers of modern manned flight.

Following the first sustained flight of a heavier-than-air vehicle in 1903, the Wright brothers refined their design, eventually selling airplanes to the U.S. Army. The first major impetus to aircraft development came during World War I, when aircraft were designed and constructed for specific military missions, including fighter attack, bombing, and reconnaissance. The end of the war marked the decline of high-technology military aircraft and the rise of civil air transportation.

Many advances in the civil sector were due to technologies gained in developing military and racing aircraft. A successful military design that found many civil applications was the U.S. Navy Curtiss NC-4 flying boat, powered by four 400-horsepower V-12 Liberty engines. It was the British, however, who paved the way in civil aviation in 1920 with a 12-passenger Handley Page transport. Aviation boomed after Charles A. Lindbergh's solo flight across the Atlantic Ocean in 1927. Advances in metallurgy improved strength-to-weight ratios and, coupled with monocoque construction, enabled aircraft to fly farther and faster. Hugo Junkers, a German, built the first all-metal monoplane in 1915, but the design was not widely accepted until 1933, when the Boeing 247 entered service. The twin-engine design of the latter established the foundation of modern air transport.
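The link between better strength-to-weight ratios and longer range can be made concrete with the Breguet range equation for propeller aircraft. The equation is standard, but this is only an illustrative sketch: all parameter values below are hypothetical, chosen to resemble a 1930s-era piston transport rather than any specific machine.

```python
# Breguet range equation (propeller form):
#   R = (eta / c) * (L/D) * ln(W_initial / W_final)
# where c is the fuel weight consumed per unit of propulsive energy.
import math

G = 9.81  # gravitational acceleration, m/s^2

def breguet_range_prop(eta, sfc_kg_per_joule, lift_to_drag, weight_ratio):
    """Range in metres for a propeller aircraft.

    eta               propeller efficiency (dimensionless, ~0.8)
    sfc_kg_per_joule  power-specific fuel consumption, kg of fuel per joule
    lift_to_drag      aerodynamic efficiency L/D
    weight_ratio      takeoff weight / landing weight (> 1)
    """
    c = sfc_kg_per_joule * G  # mass flow per watt -> weight flow (units 1/m)
    return (eta / c) * lift_to_drag * math.log(weight_ratio)

# Hypothetical values: 0.27 kg/kWh piston-engine fuel consumption,
# L/D of 12, and 20 % of takeoff weight burned as fuel.
sfc = 0.27 / 3.6e6  # kg per joule (1 kWh = 3.6e6 J)
R = breguet_range_prop(0.8, sfc, 12.0, 1.0 / 0.8)
print(f"range = {R / 1000:.0f} km")
```

Because range grows with the logarithm of the weight ratio, every kilogram of structure saved can be traded directly for fuel, which is why the metallurgical advances of the interwar years mattered so much.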

The advent of the turbine-powered airplane dramatically changed the air transportation industry. Germany and Britain developed the jet engine concurrently, but it was the German Heinkel He 178 that made the first jet flight, on Aug. 27, 1939. Even though World War II accelerated aircraft development, jet aircraft did not enter service until 1944, when the British Gloster Meteor and the German Messerschmitt Me 262 became operational. The first practical American jet was the Lockheed F-80, which entered service in 1945.

Notable figures in aeronautical engineering:

A.P.J. Abdul Kalam (president of India)
Alexander M. Lippisch (German-American aerodynamicist)
B.J. Habibie (president of Indonesia)
Ben R. Rich (American engineer)
Bruce McCandless (American naval aviator and astronaut)
Burt Rutan (American aircraft and spacecraft designer)
Charles Lanier Lawrance (American aeronautical engineer)
Charles Stark Draper (American engineer)
Daniel Saul Goldin (American engineer)
Eugen Sänger (Austrian engineer)
George Michael Low (Austrian-born American aerospace engineer)
Georgy Ivanov (Bulgarian cosmonaut)
Hermann Oberth (German scientist)
Hugo Eckener (German aeronautical engineer)
Igor Sikorsky (Russian-American engineer)
Jean-Felix Piccard (American chemical engineer)
Jerome C. Hunsaker (American aeronautical engineer)
Juan de la Cierva (Spanish engineer)
Kelly Johnson (American aeronautical engineer)
Konstantin Eduardovich Tsiolkovsky (Soviet scientist)
Lawrence Hargrave (British aeronautical engineer)
Marcel Dassault (French industrialist)
Max Faget (American engineer)
Michael Griffin (American aerospace engineer)
Octave Chanute (American engineer)
Otto Lilienthal (German aeronautical engineer)
Paul Cornu (French engineer)
Percy Sinclair Pilcher (British engineer)
Qian Xuesen (Chinese scientist)
Robert C. Seamans, Jr. (American aeronautical engineer)
Robert Hutchings Goddard (American scientist)
Samuel Kurtz Hoffman (American engineer)
Samuel Pierpont Langley (American engineer)
Sergey Pavlovich Korolyov (Soviet scientist)
Sergey Vladimirovich Ilyushin (Soviet aircraft designer)
Sir Barnes Wallis (British military engineer)
Theodore von Kármán (Hungarian-born American engineer)
Valentin Petrovich Glushko (Soviet scientist)
Vladimir Nikolayevich Chelomey (Soviet scientist)
Walter Robert Dornberger (German engineer)
Wernher von Braun (German-born American engineer)
William Hayward Pickering (American engineer and physicist)
Willy Messerschmitt (German engineer)
Commercial aircraft after World War II continued to use the more economical propeller method of propulsion. The efficiency of the jet engine was increased, and in 1949 the British de Havilland Comet inaugurated commercial jet transport flight. The Comet, however, experienced structural failures that curtailed the service, and it was not until 1958 that the highly successful Boeing 707 jet transport began nonstop transatlantic flights. While civil aircraft designs utilize most new technological advancements, the transport and general aviation configurations have changed only slightly since 1960. Because of escalating fuel and hardware prices, the development of civil aircraft has been dominated by the need for economical operation.

Technological improvements in propulsion, materials, avionics, and stability and control systems have enabled aircraft to grow in size, carrying more cargo faster and over longer distances. While aircraft are becoming safer and more efficient, they are also far more complex. Today's commercial aircraft are among the most sophisticated engineering achievements in existence.

Smaller, more fuel-efficient airliners are being developed. The use of turbine engines in light general aviation and commuter aircraft is being explored, along with more efficient propulsion systems, such as the propfan concept. Using satellite communication signals, onboard microcomputers can provide more accurate vehicle navigation and collision-avoidance systems. Digital electronics coupled with servomechanisms can increase efficiency by providing active stability augmentation of control systems.

New composite materials providing greater weight reduction; inexpensive one-person, lightweight, noncertified aircraft, referred to as ultralights; and alternative fuels such as ethanol, methanol, synthetic fuel from shale deposits and coal, and liquid hydrogen are all being explored. Aircraft designed for vertical and short takeoff and landing, which can operate from runways one-tenth the normal length, are also being developed. Hybrid vehicles such as the Bell XV-15 tilt-rotor already combine the vertical and hover capabilities of the helicopter with the speed and efficiency of the airplane. Although environmental restrictions and high operating costs have limited the success of the supersonic civil transport, the appeal of reduced traveling time justifies the examination of a second generation of supersonic aircraft.