Virus

Thursday 30 May 2013

Viruses have existed for as long as life has been on Earth.
Early references to viruses

Early references to viral infections include Homer’s mention of “rabid dogs”; rabies, a viral disease of dogs, was also known in Mesopotamia.

Polio, another disease caused by a virus, leads to paralysis of the lower limbs; its effects appear to be depicted in drawings from ancient Egypt.

In addition, smallpox, a viral disease that has since been eradicated worldwide, played a significant role in the history of South and Central America.
Virology – the study of viruses

The study of viruses is called virology. Experimental virology began with Edward Jenner's work in 1798. Jenner did not know the cause, but he found that individuals exposed to cowpox did not suffer from smallpox.

He began the first known form of vaccination, using cowpox infection to prevent smallpox. At that time the causative organism and the basis of the immunity were still unknown for both cowpox and smallpox.
Koch and Henle

Koch and Henle formulated their postulates on the microbial cause of disease. These state that:
the organism must regularly be found in the lesions of the disease
it must be isolated from the diseased host and grown in pure culture
inoculation of the pure culture into a healthy host should reproduce the disease, and the organism should be recoverable from that secondarily infected host as well

Viruses do not conform to all of these postulates.
Louis Pasteur

Between 1881 and 1885, Louis Pasteur first used animals as models for growing and studying viruses. He found that the rabies virus could be cultured in rabbit brains and developed the rabies vaccine. However, Pasteur did not attempt to identify the infectious agent.
The discovery of viruses

1886-1903 – This was the period in which viruses were actually discovered. Ivanowski observed that the agent of tobacco mosaic disease passed through filters that retain bacteria, and in 1898 Beijerinck demonstrated the filterable character of the virus and concluded that it is an obligate parasite. This means that the virus is unable to multiply on its own outside a living host.
Charles Chamberland and filterable agents

In 1884, the French microbiologist Charles Chamberland invented a filter with pores smaller than bacteria. These Chamberland filter-candles, made of unglazed porcelain or of diatomaceous earth (kieselguhr), had originally been developed for water purification. They retained bacteria and had a pore size of 0.1-0.5 micron. Agents that passed through them were called “filterable” organisms. Loeffler and Frosch (1898) reported that the infectious agent of foot-and-mouth disease was such a filterable agent.

In 1900, yellow fever became the first human disease shown to be caused by a filterable agent, through the work of Walter Reed. He found the yellow fever virus in the blood of patients during the febrile phase and showed that it was spread by mosquitoes. In 1853 an epidemic in New Orleans had a mortality rate as high as 28%. Infection was eventually controlled by destroying mosquito populations.
Trapping viruses

In the 1930s Elford developed collodion membranes that could trap viruses, showing that they were far smaller than bacteria, on the order of tens to hundreds of nanometers. In 1908, Ellerman and Bang demonstrated that certain types of tumors (leukemia of chickens) were caused by viruses. In 1911 Peyton Rous discovered that a non-cellular agent could transmit solid tumors; the agent was later named Rous sarcoma virus (RSV).
Bacteriophages

The most important discovery of this period was the bacteriophage. In 1915 Twort, working with vaccinia virus, found agents that destroyed the bacteria in his cultures; these agents came to be known as bacteriophages. Twort abandoned this work after World War I. In 1917, D'Herelle, a Canadian, independently described similar bacteriophages.
Images of viruses

In 1931 the German engineers Ernst Ruska and Max Knoll invented the electron microscope, which enabled the first images of viruses. In 1935, American biochemist and virologist Wendell Stanley examined the tobacco mosaic virus and found it to be made mostly of protein. A short time later, this virus was separated into protein and RNA parts. Tobacco mosaic virus was also the first virus to be crystallised, which allowed its structure to be elucidated in detail.
Molecular biology

Between 1938 and 1970, virology developed by leaps and bounds alongside molecular biology. The 1940s and 1950s were the era of the bacteriophage and the animal virus.

Delbrück is considered a father of modern molecular biology; he brought the concepts of virology into that science. In 1952, Hershey and Chase showed that it was the nucleic acid portion of the phage that carried the genetic material and was responsible for infectivity.

In 1953 Watson and Crick determined the exact structure of DNA. In 1949 Lwoff found that a virus could behave like a bacterial gene on the chromosome, work that fed into the operon model of gene induction and repression. In 1957 Lwoff defined viruses as potentially pathogenic entities with an infectious phase, possessing only one type of nucleic acid, multiplying by means of their genetic material and unable to undergo binary fission.

In 1931, American pathologist Ernest William Goodpasture grew influenza and several other viruses in fertilised chickens' eggs. In 1949, John F. Enders, Thomas Weller, and Frederick Robbins grew polio virus in cultured human embryo cells, the first virus to be grown without using solid animal tissue or eggs. This enabled Jonas Salk to make an effective polio vaccine.

The era of polio research came next and was very important: the Salk inactivated vaccine was introduced in the mid-1950s, and by 1955 poliovirus had been crystallized. Later, Sabin introduced the attenuated (oral) polio vaccine.

In the 1980s, cloning of viral genes was developed, sequencing of viral genomes succeeded, and production of hybridomas became a reality. HIV, the virus that causes AIDS, was identified in the same decade. Further uses of viruses in gene therapy developed over the next two decades.

Solar Energy

Sunday 19 May 2013


The Basics:

Solar energy technologies convert the sun’s light into usable electricity or heat. Solar energy systems can be divided into two major categories: photovoltaic and thermal. Photovoltaic cells produce electricity directly, while solar thermal systems produce heat for buildings, industrial processes or domestic hot water. Thermal systems can also generate electricity by operating heat engines or by producing steam to spin electric turbines. Solar energy systems have no fuel costs, so most of their cost comes from the original investment in the equipment. The total installed costs of solar applications vary depending on the type of financing used. Solar photovoltaics generally range from $6-$10 per watt installed, or $12,000-$30,000 for a typical 2-3 kilowatt residential-scale system. A solar hot water system sized for a typical home is much cheaper and costs between $3,500 and $8,000 depending on the size and type of the system (above prices exclude any incentives or rebates). 
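To make the arithmetic above concrete, here is a minimal sketch of the installed-cost calculation. The $6-$10 per watt range comes from the paragraph above; the 2.5-kilowatt system size is simply an assumed example within the typical 2-3 kilowatt residential range.

```python
# Rough installed-cost sketch using the price ranges quoted above.
# The $/watt figures come from the article; the 2.5 kW system size is an
# assumed example within the typical 2-3 kW residential range.

def installed_cost(system_kw, dollars_per_watt):
    """Total up-front cost of a PV system before incentives or rebates."""
    return system_kw * 1000 * dollars_per_watt

for rate in (6, 10):  # article's low and high ends, $/W installed
    print(f"2.5 kW at ${rate}/W: ${installed_cost(2.5, rate):,.0f}")
# Prints $15,000 and $25,000, inside the article's $12,000-$30,000 range.
```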
 
Resource Potential:

The Northwest receives more than enough sunlight to meet our entire energy needs for the foreseeable future. Solar resource maps show that the Northwest’s highest potential is in southeastern Oregon and southern Idaho; however, there are no “bad” solar sites—even the rainiest parts of the Northwest receive almost half as much solar energy as the deserts of California and Arizona, and they receive more than Germany, which has made itself a solar energy leader.
 
Photovoltaic Cells:

Photovoltaics (PVs) convert sunlight directly into electricity, using semiconductors made from silicon or other materials. Photovoltaic modules mounted on homes in the Northwest can produce electricity at a levelized cost of 20-60 cents per kilowatt-hour (kWh) before incentives. Incentives can bring the levelized cost down considerably to 10-20 cents per kWh.
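As a rough illustration of where a levelized cost in that range can come from, here is a simplified sketch that just divides lifetime cost by lifetime energy, ignoring discounting, degradation, and maintenance. The system size, price per watt, lifetime, and capacity factor are all assumed example values, not figures from this article.

```python
# A simplified levelized-cost sketch (ignores discounting and O&M).
# All inputs are illustrative assumptions: a 2.5 kW system at $8/W,
# a 25-year life, and a modest Northwest-style capacity factor.

def simple_lcoe(cost_dollars, system_kw, capacity_factor, years):
    """Lifetime cost divided by lifetime energy, in dollars per kWh."""
    lifetime_kwh = system_kw * capacity_factor * 8760 * years
    return cost_dollars / lifetime_kwh

cost = 2.5 * 1000 * 8                                # $20,000 up front
print(round(simple_lcoe(cost, 2.5, 0.13, 25), 2))    # ~$0.28/kWh, within the 20-60 cent range
```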
 
PVs generate power on a much smaller scale than traditional utility power plants, so they can often provide high-value electricity exactly where and when it is needed. PVs are often the best choice for supplying power for remote, “off-grid” sites or in situations where the transmission or distribution system would otherwise need to be upgraded in order to meet peak demands. Distribution line extensions of more than half a mile are generally more expensive than investing in a PV system for a typical home.
 
Other cost-effective PV applications include building-integrated power generation, meeting high summer demand for electricity (e.g., air conditioning), pumping water, lighting signs and powering equipment used for communications, safety or signaling.
 
Prices for photovoltaics are falling as markets expand. Solar PV demand has grown consistently by 20-25% per year over the past 20 years while solar cell prices fell from $27 per watt of capacity in 1982 to less than $4 per watt today.
 
Direct Thermal:

Direct-use thermal systems are usually located on individual buildings, where they use solar energy directly as a source of heat. The most common systems use sunlight to heat water for houses or swimming pools, or use collector systems or passive solar architecture to heat living and working spaces. These systems can replace electric heating for as little as three cents per kilowatt-hour, and utility and state incentives reduce the costs even further in some cases.
 
Environmental Impacts:

Solar power is an extremely clean way to generate electricity. There are no air emissions associated with the operation of solar modules or direct application technologies. Residential-scale passive construction, photovoltaic, solar water heating, and other direct applications reduce power generation from traditional sources and the associated environmental impacts.
 
Net Metering:

Utilities in all four Northwestern states offer net metering programs, which make it easy for customers to install solar electric systems at their homes or businesses. In a net metering program, customers feed extra power generated by their solar equipment during the day into the utility’s electrical grid for distribution to other customers. Then, at night or other times when the customer needs more power than their system generates, the building draws power back from the utility grid.
 
Net metering allows customers to install solar equipment without the need for expensive storage systems, and without wasting extra power generated when sunlight is at its peak. Such programs also provide a simple, standardized way for customers to use solar systems while retaining access to utility-supplied power.
 
In most net metering programs, the utility installs a special ‘dual-reading’ meter at the customer’s building, which keeps track of both the energy consumed by the building and the energy generated by the solar array. The customer is billed only for the net amount of electricity they draw from the utility, effectively receiving the utility’s full retail price for the electricity they generate themselves.
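A minimal sketch of that billing logic, with made-up numbers for the retail rate and for one month’s consumption and generation (real programs differ in how they carry excess credits forward):

```python
# A minimal sketch of net-metered billing as described above.
# The retail rate and the monthly kWh figures are illustrative assumptions.

def net_metering_bill(consumed_kwh, generated_kwh, retail_rate):
    """Bill only the net energy drawn from the grid (no credit carry-over modeled)."""
    net_kwh = max(consumed_kwh - generated_kwh, 0.0)
    return net_kwh * retail_rate

# Example month: the home uses 900 kWh, the array produces 350 kWh.
print(f"${net_metering_bill(900, 350, 0.11):.2f}")  # 550 kWh * $0.11 = $60.50
```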
 
[Chart: Annual U.S. Solar Installations by Technology. Source: Interstate Renewable Energy Council]
 
Net metering is available from utilities throughout Oregon and Washington, and law requires most Montana utilities to offer it as well. Additionally, Idaho Power and Rocky Mountain Power offer net metering in Idaho in accord with a Public Utilities Commission rule.
 
Incentive Programs in the Northwest:

Every state in the Northwest offers incentives for solar energy development. Oregon, Idaho and Montana all offer low-interest loans and substantial tax credits for solar systems bought by businesses, individuals or governments. Washington now offers a production incentive of $0.15/kilowatt-hour or more for electricity from solar energy, depending on where the technology was manufactured. Montana and Oregon also exempt solar systems from property tax assessment, while Idaho and Washington exempt solar system purchases from sales taxes. Many local utilities and regional organizations also provide incentives. For example, the Energy Trust of Oregon offers additional rebates and loans to customers of Oregon’s two largest utilities and many utilities offer additional rebates, loans, or production incentives for solar energy systems.

Robotics

Saturday 18 May 2013


Robotics:


Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, Pneumatica and Automata, that testify to the existence of hundreds of different kinds of “wonder” machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.

The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story “Liar!” Science fiction authors throughout history have been interested in man’s capability of producing self-motivating machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley’s Dr. Frankenstein and Arthur C. Clarke’s HAL 9000. Essentially, a robot is a re-programmable machine that is capable of movement in the completion of a task. Robots use special coding that differentiates them from other machines and machine tools, such as CNC. Robots have found uses in a wide variety of industries due to their robustness and precision.


Historical Robotics 


Many sources attest to the popularity of automatons in ancient and medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and as part of religious ceremonies. In myth, long predating modern industrial robots, the Greek god Hephaestus was supposed to have built automatons to work for him in his workshop. Unfortunately, none of the early automatons are extant.

In the Middle Ages, in both Europe and the Middle East, automatons were popular as part of clocks and religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band and a waitress automaton that served drinks. In Europe, there is an automaton monk extant that kisses the cross in its hands. Many other automata were created that showed moving animals and humanoid figures that operated on simple cam systems, but in the 18th century, automata were understood well enough and technology advanced to the point where much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that they traveled Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.


Victorian Robots 


The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most successful projects being the difference engine and the analytical engine. Although never completed due to lack of funds, these two machines laid out the basics for mechanical calculations. Others such as Ada Lovelace recognized the future possibility of computers creating images or playing music.

Automata continued to provide entertainment during the 19th century, but coterminous with this period was the development of steam-powered machines and engines that helped to make manufacturing much more efficient and quick. Factories began to employ machines to either increase work loads or precision in the production of many products. 

The 20th Century to Today

 In 1920, Karel Capek published his play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot.” It was taken from an old Slavic word that meant something akin to “monotonous or forced labor.” However, it was thirty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm device that transported die castings in a General Motors plant in New Jersey, which started work in 1961. Unimation, the company Devol founded with robotic entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, robotics began to develop into another tool in the industrial manufacturing arsenal.


Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections for the 2011 population of industrial robots are around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to achieve. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

Super Computer

SUPER COMPUTER:






Supercomputing is all about pushing out the leading edge of computer speed and performance. The sports metaphors that arise as research sites compete to create the fastest supercomputer sometimes obscure the goal of crunching numbers that had previously been uncrunchable -- and thereby providing information that had previously been inaccessible.

Supercomputers have been used for weather forecasting, fluid dynamics (such as modeling air flow around airplanes or automobiles) and simulations of nuclear explosions -- applications with vast numbers of variables and equations that have to be solved or integrated numerically through an almost incomprehensible number of steps, or probabilistically by Monte Carlo sampling.
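For readers unfamiliar with the Monte Carlo idea mentioned above, here is a toy example: estimating pi by random sampling rather than by solving anything directly. Production workloads apply the same principle to far more complicated models and astronomically more samples.

```python
# A tiny illustration of Monte Carlo sampling: estimate a quantity (here, pi)
# by drawing random points instead of solving equations directly.
import random

def estimate_pi(samples):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # ~3.14, improving slowly as the sample count grows
```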

The first machine generally referred to as a supercomputer (though not officially designated as one), the IBM Naval Ordnance Research Calculator, was used at Columbia University from 1954 to 1963 to calculate missile trajectories. It predated microprocessors, had a clock cycle of 1 microsecond and was able to perform about 15,000 operations per second.

About half a century later, the latest entry to the world of supercomputers, IBM's Blue Gene/L at Lawrence Livermore National Laboratory, will have 131,072 microprocessors when fully assembled and was clocked at 135.3 trillion floating-point operations per second (TFLOPS) in March.

The computer at Livermore will be used for nuclear weapons simulations. The Blue Gene family will also be used for biochemical applications, reflecting shifts in scientific focus, making intricate calculations to simulate protein folding specified by genetic codes.

The early history of supercomputers is closely associated with Seymour Cray, who designed the first officially designated supercomputers for Control Data in the late 1960s. His first design, the CDC 6600, had a pipelined scalar architecture and used the RISC instruction set that his team developed. In this architecture, a single CPU overlaps fetching, decoding and executing instructions to process one instruction each clock cycle.
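A toy sketch of that overlap, using three simplified stage names (fetch, decode, execute) that stand in for whatever a real pipeline does:

```python
# A toy illustration of pipelining: once the pipeline is full, one instruction
# completes every cycle even though each instruction passes through three stages.
# The stage names are simplified assumptions, not a model of the CDC 6600.

STAGES = ["fetch", "decode", "execute"]

def pipeline_schedule(num_instructions):
    """Return {cycle: {stage: instruction}} for a simple 3-stage pipeline."""
    schedule = {}
    for i in range(num_instructions):
        for s, stage in enumerate(STAGES):
            schedule.setdefault(i + s, {})[stage] = f"I{i}"
    return schedule

for cycle, stages in sorted(pipeline_schedule(4).items()):
    print(cycle, stages)
# From cycle 2 onward, one instruction finishes the "execute" stage every cycle.
```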

Cray pushed the number-crunching speed available from the pipelined scalar architecture with the CDC 7600 before developing a four-processor architecture with the CDC 8600. Multiple processors, however, raised operating system and software issues.

When Cray left CDC in 1972 to start his own company, Cray Research, he abandoned the multiprocessor architecture in favour of vector processing, a split that divides supercomputing camps to this day.

Cray Research pursued vector processing, in which hardware was designed to unwrap "for" or "do" loops. Using a CDC 6600, the European Centre for Medium-Range Weather Forecasts (ECMWF) produced a 10-day forecast in 12 days. But using one of Cray Research's first products, the Cray 1-A, the ECMWF was able to produce a 10-day forecast in five hours.
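To see what “unwrapping a loop” means in practice, here is the same calculation written as an explicit element-by-element loop and as a single vector operation; NumPy merely stands in for the vector hardware, and the array size is arbitrary.

```python
# Scalar loop vs. vector operation over the same data.
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Scalar style: one element per "iteration".
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = 2.0 * a[i] + b[i]

# Vector style: the whole loop becomes one operation over the arrays.
c_vec = 2.0 * a + b

print(np.allclose(c_loop, c_vec))  # True; same result, far fewer instructions issued
```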

National Pride

Throughout their early history, supercomputers remained the province of large government agencies and government-funded institutions. The production runs of supercomputers were small, and their export was carefully controlled, since they were used in critical nuclear weapons research. They were also a source of national pride, symbolic of technical leadership.

So when the US National Science Foundation (NSF) decided in 1996 to buy a Japanese-made NEC supercomputer for its Colorado weather-research center, the decision was seen as another nail in the coffin of US technological greatness. Antidumping legislation was brought to bear against the importation of Japanese supercomputers, which were and still are based on improvements on vector processing.

But within two years of the NSF's decision, the supercomputing landscape changed. The antidumping decision was revoked. And the ban on exporting supercomputers to nuclear-capable nations was also partially rescinded. What had happened?

For one thing, microprocessor speeds found on desktops had overtaken the computing power of yesteryear's supercomputers. Video games were using the kind of processing power that had previously been available only in government laboratories. The first Bush administration defined supercomputers as being able to perform more than 195 million theoretical operations per second (MTOPS). By 1997, ordinary microprocessors were capable of over 450 MTOPS.

Technologists began building distributed and massively parallel supercomputers and were able to tackle the operating system and software problems that had deterred Seymour Cray from multiprocessing 40 years before. Peripheral speeds had increased so that I/O was no longer a bottleneck. High-speed communications made distributed and parallel designs possible.

As a result, vector processing technology may be in eclipse. NEC produced the Earth Simulator in 2002, which uses 5,104 processors and vector technology. According to the Top500 list of supercomputers (www.top500.org), the Simulator achieves 35.86 TFLOPS. IBM's Blue Gene/L, the current leader, is expected to achieve about 200 TFLOPS. It consumes about one-fifteenth the power per computation and is about 50 times smaller than previous supercomputers.

As detailed on the Top500 site, the trend in supercomputers is toward clusters of scalar processors running Linux and leveraging the power of off-the-shelf microprocessors, open-source operating systems and 50 years of experience with the middleware needed to pull these elements together.

Flying Cars

FLYING CARS:






Just a decade and a half after the Wright Brothers took off in their airplane over the plains of Kitty Hawk, N.C., in 1903, other pioneering men began chasing the dream of a flying car. There was even one attempt in the 18th century to develop a gliding horse cart, which, to no great surprise, failed. There are nearly 80 patents on file at the United States Patent and Trademark Office for various kinds of flying cars. Some of these have actually flown. Most have not. And all have come up short of reaching the goal of the mass-produced flying car. Here's a look back at a few of the flying cars that distinguished themselves from the pack:
  • Curtiss Autoplane - In 1917, Glenn Curtiss, who could be called the father of the flying car, unveiled the first attempt at such a vehicle. His aluminum Autoplane sported three wings that spanned 40 feet (12.2 meters). The car's motor drove a four-bladed propeller at the rear of the car. The Autoplane never truly flew, but it did manage a few short hops.
  • Arrowbile - Developed by Waldo Waterman in 1937, the Arrowbile was a hybrid Studebaker-aircraft. Like the Autoplane, it too had a propeller attached to the rear of the vehicle. The three-wheeled car was powered by a typical 100-horsepower Studebaker engine. The wings detached for storage. A lack of funding killed the project.
  • Airphibian - Robert Fulton, who was a distant relative of the steam engine inventor, developed the Airphibian in 1946. Instead of adapting a car for flying, Fulton adapted a plane for the road. The wings and tail section of the plane could be removed to accommodate road travel, and the propeller could be stored inside the plane's fuselage. It took only five minutes to convert the plane into a car. The Airphibian was the first flying car to be certified by the Civil Aeronautics Administration, the predecessor of the Federal Aviation Administration (FAA). It had a 150-horsepower, six-cylinder engine and could fly at 120 miles per hour and drive at 50 mph. Despite his success, Fulton couldn't find a reliable financial backer for the Airphibian.
  • ConvAirCar - In the 1940s, Consolidated-Vultee developed a two-door sedan equipped with a detachable airplane unit. The ConvAirCar debuted in 1947, and offered one hour of flight and a gas mileage of 45 miles (72 kilometers) per gallon. Plans to market the car ended when it crashed on its third flight.
  • Avrocar - The first flying car designed for military use was the Avrocar, developed by Avro Canada in a joint effort with the U.S. military. The flying-saucer-like vehicle was supposed to be a lightweight air carrier that would move troops to the battlefield.
  • Aerocar - Inspired by the Airphibian and Robert Fulton, whom he had met years before, Moulton "Molt" Taylor created perhaps the most well-known and most successful flying car to date. The Aerocar was designed to drive, fly and then drive again without interruption. Taylor covered his car with a fiberglass shell. A 10-foot-long (3-meter) drive shaft connected the engine to a pusher propeller. It cruised at 120 mph (193 kph) in the air and was the second and last roadable aircraft to receive FAA approval. In 1970, Ford Motor Co. even considered marketing the vehicle, but the decade's oil crisis dashed those plans.
These pioneers never managed to develop a viable flying car, and some even died testing their inventions. However, they proved that a car could be built to fly, and inspired a new group of roadable aircraft enthusiasts. With advances in lightweight material, computer modeling and computer-controlled aircraft, the dream is very close to becoming reality. In the next section, we will look at the flying cars being developed today that eventually could be in our garages.

Aeronautical Engineering


Aeronautical engineering:












The roots of aeronautical engineering can be traced to the early days of mechanical engineering, to inventors’ concepts, and to the initial studies of aerodynamics, a branch of theoretical physics. The earliest sketches of flight vehicles were drawn by Leonardo da Vinci, who suggested two ideas for sustentation. The first was an ornithopter, a flying machine using flapping wings to imitate the flight of birds. The second idea was an aerial screw, the predecessor of the helicopter.

Manned flight was first achieved in 1783, in a hot-air balloon designed by the French brothers Joseph-Michel and Jacques-Étienne Montgolfier. Aerodynamics became a factor in balloon flight when a propulsion system was considered for forward movement. Benjamin Franklin was one of the first to propose such an idea, which led to the development of the dirigible. The power-driven balloon was invented by Henri Giffard, a Frenchman, in 1852. The invention of lighter-than-air vehicles occurred independently of the development of aircraft.

The breakthrough in aircraft development came in 1799 when Sir George Cayley, an English baronet, drew an airplane incorporating a fixed wing for lift, an empennage (consisting of horizontal and vertical tail surfaces for stability and control), and a separate propulsion system. Because engine development was virtually nonexistent, Cayley turned to gliders, building the first successful one in 1849. Gliding flights established a database for aerodynamics and aircraft design. Otto Lilienthal, a German scientist, recorded more than 2,000 glides in a five-year period, beginning in 1891. Lilienthal’s work was followed by the American aeronaut Octave Chanute, a friend of the American brothers Orville and Wilbur Wright, the fathers of modern manned flight.

Following the first sustained flight of a heavier-than-air vehicle in 1903, the Wright brothers refined their design, eventually selling airplanes to the U.S. Army. The first major impetus to aircraft development occurred during World War I, when aircraft were designed and constructed for specific military missions, including fighter attack, bombing, and reconnaissance. The end of the war marked the decline of military high-technology aircraft and the rise of civil air transportation. Many advances in the civil sector were due to technologies gained in developing military and racing aircraft. A successful military design that found many civil applications was the U.S. Navy Curtiss NC-4 flying boat, powered by four 400-horsepower V-12 Liberty engines. It was the British, however, who paved the way in civil aviation in 1920 with a 12-passenger Handley-Page transport. Aviation boomed after Charles A. Lindbergh’s solo flight across the Atlantic Ocean in 1927. Advances in metallurgy led to improved strength-to-weight ratios and, coupled with a monocoque design, enabled aircraft to fly farther and faster. Hugo Junkers, a German, built the first all-metal monoplane in 1910, but the design was not accepted until 1933, when the Boeing 247-D entered service. The twin-engine design of the latter established the foundation of modern air transport.

The advent of the turbine-powered airplane dramatically changed the air transportation industry. Germany and Britain were concurrently developing the jet engine, but it was a German Heinkel He 178 that made the first jet flight on Aug. 27, 1939. Even though World War II accelerated the growth of the airplane, the jet aircraft was not introduced into service until 1944, when the British Gloster Meteor became operational, shortly followed by the German Me 262. The first practical American jet was the Lockheed F-80, which entered service in 1945.

Commercial aircraft after World War II continued to use the more economical propeller method of propulsion. The efficiency of the jet engine was increased, and in 1949 the British de Havilland Comet inaugurated commercial jet transport flight. The Comet, however, experienced structural failures that curtailed the service, and it was not until 1958 that the highly successful Boeing 707 jet transport began nonstop transatlantic flights. While civil aircraft designs utilize most new technological advancements, the transport and general aviation configurations have changed only slightly since 1960. Because of escalating fuel and hardware prices, the development of civil aircraft has been dominated by the need for economical operation.

Technological improvements in propulsion, materials, avionics, and stability and controls have enabled aircraft to grow in size, carrying more cargo faster and over longer distances. While aircraft are becoming safer and more efficient, they are also now very complex. Today’s commercial aircraft are among the most sophisticated engineering achievements of the day.

Smaller, more fuel-efficient airliners are being developed. The use of turbine engines in light general aviation and commuter aircraft is being explored, along with more efficient propulsion systems, such as the propfan concept. Using satellite communication signals, onboard microcomputers can provide more accurate vehicle navigation and collision-avoidance systems. Digital electronics coupled with servo mechanisms can increase efficiency by providing active stability augmentation of control systems. New composite materials providing greater weight reduction; inexpensive one-man, lightweight, noncertified aircraft, referred to as ultralights; and alternate fuels such as ethanol, methanol, synthetic fuel from shale deposits and coal, and liquid hydrogen are all being explored. Aircraft designed for vertical and short takeoff and landing, which can land on runways one-tenth the normal length, are being developed. Hybrid vehicles such as the Bell XV-15 tilt-rotor already combine the vertical and hover capabilities of the helicopter with the speed and efficiency of the airplane. Although environmental restrictions and high operating costs have limited the success of the supersonic civil transport, the appeal of reduced traveling time justifies the examination of a second generation of supersonic aircraft.

Gasoline

Thursday 16 May 2013


The First Oil Well Was Dug Just Before the Civil War

Edwin Drake dug the first oil well in 1859 and distilled the petroleum to produce kerosene for lighting. Drake had no use for the gasoline or other products, so he discarded them. It wasn't until 1892 with the invention of the automobile that gasoline was recognized as a valuable fuel. By 1920, there were 9 million vehicles on the road powered by gasoline, and service stations were popping up everywhere.
A field of dozens of oil wells just offshore at Summerland, California (Santa Barbara County), in 1915

Higher Octane and Lead Levels

By the 1950s, cars were becoming bigger and faster. Octane levels increased and so did lead levels; lead was added to gasoline to improve engine performance.

Leaded Gasoline Was Taken Off the U.S. Market

Unleaded gasoline was introduced in the 1970s, when the health problems from lead became apparent. In the United States, leaded gasoline for road vehicles was phased out through the 1980s and finally banned in 1996, but it is still being used in some parts of the world.

Generators


Today, everybody is familiar with electricity. Almost everybody uses electricity as ready-to-use energy that is delivered in a clean way. This is the result of long research and engineering work which can be traced back for centuries. The first generators of electricity were not electrodynamic like today's machines; they were based on electrostatic principles. Long before electrodynamic generators were invented, electrostatic machines and devices had their place in science. Because of their principle of operation, electrostatic generators produce high voltages but only low currents. The output is always a unipolar static voltage; depending on the materials used, it may be positive or negative.
Friction is the key to the operation! Although most of the mechanical energy needed to power an electrostatic generator is converted into heat, a fraction of the work (not a fraction of friction!) is used to build up electric potential by separating charges.

The Beginnings

In ancient Greece, amber was known to attract small objects after being rubbed with cloth or fur. From the Greek word elektron, the modern term electricity is directly derived. In 1600, William Gilbert (1544-1603) coined the expression electrica in his famous book De Magnete.
In ancient Greece, there was no effort to mechanize the rubbing of a piece of amber in order to get a continuous effect. Although light could be observed in the dark, nobody made a connection between this and lightning, which was regarded as Zeus' weapon. Knowledge about this type of electricity remained almost unchanged until the beginning of the seventeenth century. Several ancient authors such as Pliny the Elder, and Renaissance men such as Giovanni Battista della Porta, described the effect without drawing further conclusions.

The Sulphur Ball

Otto von Guericke (1602-1686), who became famous for his Magdeburg vacuum experiments, invented the first simple electrostatic generator. It was made of a sulphur ball which rotated in a wooden cradle. The ball itself was rubbed by hand. As the principles of electric conduction had not yet been discovered, von Guericke carried the charged sulphur ball to the place where the electric experiment was to take place.

von Guericke's first electrostatic generator around 1660
Guericke made the ball by pouring molten sulphur into a hollow glass sphere. After the sulphur had cooled, the glass shell was smashed and removed. Later, a researcher found out that an empty glass sphere by itself gave the same results.

A Baroque Gas Discharging Lamp

By 1730, scientific research had uncovered the principles of electric conduction. An inspiration for electric research came from an area which at first glance had absolutely nothing to contribute: the mercury barometer invented by Evangelista Torricelli. If the mercury-filled tube was shaken and the evacuated portion of the tube was observed in the dark, a light emission could be seen. Francis Hauksbee, both inventive and inquisitive, designed a rotor to rub a small disk of amber in a vacuum chamber. When the chamber contained some mercury vapour, it lit up! This was the first mercury gas discharge lamp. The engravings show surprising similarities to today's lightning spheres.

Hauksbee's amber rotor

Hauksbee's setup to demonstrate light effects caused by static electricity.

 The Beer Glass Generator

Glass proved to be an ideal material for an electrostatic generator. It was cheaper than sulphur and could easily be shaped into disks or cylinders. An ordinary beer glass turned out to be a good insulating rotor in Winkler's electrostatic machine.

An electrostatic machine invented by Johann Heinrich Winkler (1703-1770)
Machines like these were not only made for scientific research but were also favourite toys for amusement. In the 18th century, everybody wanted to experience the electric shock. Experiments like the "electric kiss" were a salon pastime. Although the French Abbé Nollet demonstrated in 1745 that small animals like birds and fish were killed instantaneously by the discharge of a Leyden jar, nobody was aware of the latent dangers of these experiments.

The electric kiss provided a very special thrill
Soon after the effects of electrostatic discharge were discovered, researchers and charlatans started trying to cure diseases with electric shocks. Here we find parallels to the "Mesmerists" who claimed to use magnetic powers for therapy.

Toothache therapy around 1750
Being ill at that time was no fun!

 The Leyden Jar

In 1745, the so-called Leyden jar (or Leyden bottle) was invented by Ewald Jürgen von Kleist (1700-1748). Kleist was searching for a way to store electric energy and had the idea of filling it into a bottle! The bottle contained water or mercury and was placed on a metal surface with a ground connection. No wonder the device worked, though not because electricity can be filled into bottles. One year after Kleist, the physicist Cunaeus in Leyden, in the Netherlands, independently invented this bottle again. Thus the term Leyden jar became the more familiar one, although in Germany the device was sometimes also called Kleist's bottle.
Intense research began to find out which liquid was the most suitable. A few years later, researchers had learned that the water was not necessary: a metal coating inside and outside the jar was sufficient for storing electrostatic charge. Thus the first capacitors were born.
Early Leyden jars
An advanced electrostatic battery in 1795

Frequently, several jars were connected in order to multiply the charge. Experimenting with this type of capacitor started to become pretty dangerous. In 1753, while trying to charge a battery during a thunderstorm, Prof. Richmann was killed when he unintentionally got too close to a conductor with his head. He is the first known victim of high-voltage experiments in the history of physics. Benjamin Franklin had a good deal of luck not to win this honour when performing his kite experiments.
St. Petersburg, 6 August 1753. Prof. Richmann and his assistant being struck by lightning while charging capacitors. The assistant escaped almost unharmed, whereas Richmann died immediately. The pathological analysis revealed that "he only had a small hole in his forehead, a burnt left shoe and a blue spot at his foot. [...] the brain being ok, the front part of the lung sound, but the rear being brown and black with blood." The conclusion was that the electric discharge had taken its way through Richmann's body. The scientific community was shocked.

 The Disk Rotor

Generators based on disks were invented around 1800 by Winter. Their characteristic construction element is a leather cushion, prepared with mercury amalgam, that covers approximately one fourth of the disk's surface area. The leather cushion replaced the experimenter's hand and gave a more continuous result. In 1799, the first experiments in electrolysis by electrostatic energy were made. It turned out that the recently invented chemical cells produced the same or a better effect than many thousands of electric discharges from a battery of Leyden jars. Experiments like these helped to shape the understanding of electric energy.

An early disk generator by Winter

 The Advanced Rotor

Inventors found out that it is a good idea to laminate metal or cardboard sheets onto the insulating disks of electrostatic generators.

The so-called influence machine by Holtz, 1865
Disks for advanced generators of this type were made of glass, shellac or ebonite (hard rubber). Hard rubber in particular turned out to be a very suitable material, as it was not damaged as easily as glass or shellac.

 The Wimshurst Machine

Wimshurst machines are the end point of the long development of electrostatic disk machines. They gave very good results and were frequently used to power X-ray tubes. The characteristic construction element of these machines is a pair of disks laminated with radially arranged metal sectors. The advantage of disks is that they can be stacked onto one axle in order to multiply the effect.

A Wimshurst machine around 1905.
The end point of a long development.
The invention of the electromagnetic induction coil by Ruhmkorff in 1857 began to make electrostatic disk machines obsolete. Today, both devices serve mainly as demonstration objects in physics lessons to show how electric charges accumulate. For technical applications, high voltages can be generated more easily by electronic and electromagnetic methods.

A Ruhmkorff inductor to power an X-ray tube (1910)

 The Van de Graaff Generator

The principle of this machine is to transport electric charge with the aid of a belt made of insulating, flexible material, e.g. rubber. Early in the development of machinery, it was observed that mechanical transmission belts could build up unintended high voltages, which harmed people or buildings by igniting parts of a workshop. The same effect, caused by transporting highly inflammable celluloid film through the projector, was the reason more than one cinema perished in fire.

A 5-megavolt Van de Graaff generator
The principle is based on an endless insulating belt which transports electric charge to a conductor. Although the device can be operated without an additional electric power source, normally a DC high voltage is applied to the belt, considerably increasing the output voltage. Van de Graaff generators are still in use in particle accelerator labs. The largest machines produce up to 10 million volts.
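A back-of-the-envelope sketch of why such belt machines deliver megavolts but only tiny currents: the belt can carry only as much surface charge as the surrounding air tolerates before breaking down, so the charging current stays small. The belt width and speed below are assumed, typical-looking values, not the specification of any real machine.

```python
# Order-of-magnitude estimate of a belt generator's charging current.
# All numbers are assumed, illustrative values.

EPS0 = 8.854e-12            # vacuum permittivity, F/m
E_BREAKDOWN = 3e6           # approximate breakdown field of air, V/m

sigma = EPS0 * E_BREAKDOWN  # max surface charge density the belt can carry, C/m^2
belt_width = 0.3            # m (assumed)
belt_speed = 20.0           # m/s (assumed)

current = sigma * belt_width * belt_speed
print(f"charging current ~ {current * 1e6:.0f} microamps")  # on the order of 100 uA
```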

 The Steam Electrostatic Generator

Wet steam forced through a nozzle becomes electrically charged. This was the origin of the idea of building an electrostatic generator based on steam. Although these machines gave good results, they were difficult to maintain. As they were also expensive, comparatively few were built, and those that survive are in museum collections.

A steam electrostatic generator

Conclusion

Electrostatic generators have their place in the history of science. They accompanied the long path toward understanding electricity. However, their efficiency is poor compared to the mechanical effort needed to produce the electrical energy. In this context, I'd like to seriously warn all would-be inventors of electrostatic PMMs (perpetual motion machines) based on disk rotors or on the Van de Graaff principle. Machines of this type are no toy, and even small devices can be dangerous if carelessly handled. As a rule of thumb, a charged Leyden jar of 1/2 liter (= 1/8 gallon) volume can endanger your life!
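For the curious, here is a hedged, order-of-magnitude version of that rule of thumb, using the textbook formula E = ½CV² with assumed (not measured) values for an old jar's capacitance and charging voltage. The point is not the exact number but that even a small jar holds its charge at tens of kilovolts, which is why careless handling is dangerous.

```python
# Order-of-magnitude check on the warning above. Capacitance and charging
# voltage of an old half-litre Leyden jar are assumed, plausible values,
# not measurements of any particular jar.

def stored_energy(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 1/2 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

C = 1e-9      # ~1 nF, a rough guess for a small jar
V = 30_000    # ~30 kV from an electrostatic machine
print(f"{stored_energy(C, V):.2f} J at {V / 1000:.0f} kV")  # roughly half a joule
```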

hacking

Wednesday 15 May 2013

Hacking's History:

From phone phreaks to Web attacks, hacking has been a part of computing for 40 years.

Hacking has been around pretty much since the development of the first electronic computers. Here are some of the key events in the last four decades of hacking.
1960s: The Dawn of Hacking
The first computer hackers emerge at MIT. They borrow their name from a term to describe members of a model train group at the school who "hack" the electric trains, tracks, and switches to make them perform faster and differently. A few of the members transfer their curiosity and rigging skills to the new mainframe computing systems being studied and developed on campus.
1970s: Phone Phreaks and Cap'n Crunch
Phone hackers (phreaks) break into regional and international phone networks to make free calls. One phreak, John Draper (aka Cap'n Crunch), learns that a toy whistle given away inside Cap'n Crunch cereal generates a 2600-hertz signal, the same high-pitched tone that accesses AT&T's long-distance switching system.
Draper builds a "blue box" that, when used in conjunction with the whistle and sounded into a phone receiver, allows phreaks to make free calls.
Shortly thereafter, Esquire magazine publishes "Secrets of the Little Blue Box" with instructions for making a blue box, and wire fraud in the United States escalates. Among the perpetrators: college kids Steve Wozniak and Steve Jobs, future founders of Apple Computer, who launch a home industry making and selling blue boxes.
1980: Hacker Message Boards and Groups
Phone phreaks begin to move into the realm of computer hacking, and the first electronic bulletin board systems (BBSs) spring up.
The precursor to Usenet newsgroups and e-mail, the boards--with names such as Sherwood Forest and Catch-22--become the venue of choice for phreaks and hackers to gossip, trade tips, and share stolen computer passwords and credit card numbers.
Hacking groups begin to form. Among the first are Legion of Doom in the United States, and Chaos Computer Club in Germany.
1983: Kids' Games
The movie War Games introduces the public to hacking, and the legend of hackers as cyberheroes (and anti-heroes) is born. The film's main character, played by Matthew Broderick, attempts to crack into a video game manufacturer's computer to play a game, but instead breaks into the military's nuclear combat simulator computer.
The computer (codenamed WOPR, a pun on the military's real system called BURGR) misinterprets the hacker's request to play Global Thermonuclear War as an enemy missile launch. The break-in throws the military into high alert, or Def Con 1 (Defense Condition 1).
The same year, authorities arrest six teenagers known as the 414 gang (after the area code to which they are traced). During a nine-day spree, the gang breaks into some 60 computers, among them computers at the Los Alamos National Laboratory, which helps develop nuclear weapons.
1984: Hacker 'Zines
The hacker magazine 2600 begins regular publication, followed a year later by the online 'zine Phrack. The editor of 2600, "Emmanuel Goldstein" (whose real name is Eric Corley), takes his handle from the main character in George Orwell's 1984. Both publications provide tips for would-be hackers and phone phreaks, as well as commentary on the hacker issues of the day. Today, copies of 2600 are sold at most large retail bookstores.
1986: Use a Computer, Go to Jail
In the wake of an increasing number of break-ins to government and corporate computers, Congress passes the Computer Fraud and Abuse Act, which makes it a crime to break into computer systems. The law, however, does not cover juveniles.
1988: The Morris Worm
Robert T. Morris, Jr., a graduate student at Cornell University and son of a chief scientist at a division of the National Security Agency, launches a self-replicating worm on the government's ARPAnet (precursor to the Internet) to test its effect on UNIX systems.
The worm gets out of hand and spreads to some 6000 networked computers, clogging government and university systems. Morris is dismissed from Cornell, sentenced to three years' probation, and fined $10,000.
1989: The Germans and the KGB
In the first cyberespionage case to make international headlines, hackers in West Germany (loosely affiliated with the Chaos Computer Club) are arrested for breaking into U.S. government and corporate computers and selling operating-system source code to the Soviet KGB.
Three of them are turned in by two fellow hacker spies, and a fourth suspected hacker commits suicide when his possible role in the plan is publicized. Because the information stolen is not classified, the hackers are fined and sentenced to probation.
In a separate incident, a hacker is arrested who calls himself The Mentor. He publishes a now-famous treatise that comes to be known as the Hacker's Manifesto. The piece, a defense of hacker antics, begins, "My crime is that of curiosity ... I am a hacker, and this is my manifesto. You may stop this individual, but you can't stop us all."
1990: Operation Sundevil
After a prolonged sting investigation, Secret Service agents swoop down on hackers in 14 U.S. cities, conducting early-morning raids and arrests.
The arrests involve organizers and prominent members of BBSs and are aimed at cracking down on credit-card theft and telephone and wire fraud. The result is a breakdown in the hacking community, with members informing on each other in exchange for immunity.
1993: Why Buy a Car When You Can Hack One?
During radio station call-in contests, hacker-fugitive Kevin Poulsen and two friends rig the stations' phone systems to let only their calls through, and "win" two Porsches, vacation trips, and $20,000.
Poulsen, already wanted for breaking into phone-company systems, serves five years in prison for computer and wire fraud. (Since his release in 1996, he has worked as a freelance journalist covering computer crime.)
The first Def Con hacking conference takes place in Las Vegas. The conference is meant to be a one-time party to say good-bye to BBSs (now replaced by the Web), but the gathering is so popular it becomes an annual event.
1994: Hacking Tools R Us
The Internet begins to take off as a new browser, Netscape Navigator, makes information on the Web more accessible. Hackers take to the new venue quickly, moving all their how-to information and hacking programs from the old BBSs to new hacker Web sites.
As information and easy-to-use tools become available to anyone with Net access, the face of hacking begins to change.
1995: The Mitnick Takedown
Serial cybertrespasser Kevin Mitnick is captured by federal agents and charged with stealing 20,000 credit card numbers. He's kept in prison for four years without a trial and becomes a cause célèbre in the hacking underground.
After pleading guilty to seven charges at his trial in March 1999, he's eventually sentenced to little more than the time he had already served while waiting for trial.
Russian crackers siphon $10 million from Citibank and transfer the money to bank accounts around the world. Vladimir Levin, the 30-year-old ringleader, uses his work laptop after hours to transfer the funds to accounts in Finland and Israel.
Levin stands trial in the United States and is sentenced to three years in prison. Authorities recover all but $400,000 of the stolen money.
1997: Hacking AOL
AOHell is released, a freeware application that allows a burgeoning community of unskilled hackers--or script kiddies--to wreak havoc on America Online. For days, hundreds of thousands of AOL users find their mailboxes flooded with multi-megabyte mail bombs and their chat rooms disrupted with spam messages.
1998: The Cult of Hacking and the Israeli Connection
The hacking group Cult of the Dead Cow releases its Trojan horse program, Back Orifice--a powerful hacking tool--at Def Con. Once a hacker installs the Trojan horse on a machine running Windows 95 or Windows 98, the program allows unauthorized remote access of the machine.
During heightened tensions in the Persian Gulf, hackers touch off a string of break-ins to unclassified Pentagon computers and steal software programs. Then-U.S. Deputy Defense Secretary John Hamre calls it "the most organized and systematic attack" on U.S. military systems to date.
An investigation points to two American teens. A 19-year-old Israeli hacker who calls himself The Analyzer (aka Ehud Tenebaum) is eventually identified as their ringleader and arrested. Today Tenebaum is chief technology officer of a computer consulting firm.
1999: Software Security Goes Mainstream
In the wake of Microsoft's Windows 98 release, 1999 becomes a banner year for security (and hacking). Hundreds of advisories and patches are released in response to newfound (and widely publicized) bugs in Windows and other commercial software products. A host of security software vendors release anti-hacking products for use on home computers.
2000: Service Denied
In one of the biggest denial-of-service attacks to date, hackers launch attacks against eBay, Yahoo, Amazon, and others.
Activists in Pakistan and the Middle East deface Web sites belonging to the Indian and Israeli governments to protest oppression in Kashmir and Palestine.
Hackers break into Microsoft's corporate network and access source code for the latest versions of Windows and Office.
2001: DNS Attack
Microsoft becomes the prominent victim of a new type of hack that attacks the domain name server. In these denial-of-service attacks, the DNS paths that take users to Microsoft's Web sites are corrupted. The hack is detected within a few hours, but prevents millions of users from reaching Microsoft Web pages for two days.
 