
How The Atom And Atomic Physics Arose Philosophy Essay

The History, Nature, and Practice of Atomic Physics. Imre Lakatos (1976) posited that no theorem of mathematics is final or perfect. Once an exception is found, the theory is adjusted to accommodate the new information. He proposed explaining mathematical knowledge on the basis of heuristics; that is, not asking whether a solution can be proven correct, but rather adopting a "good solution," albeit sometimes at the expense of accuracy or precision. He is essentially describing the heuristic model, which will be applied in this chronology from atom to atomic theory. Algorithms are developed to describe a process, and subsequently modified to incorporate new technological knowledge.

The idea behind the atom goes back to the Ancient Greeks, who believed that all matter was made of smaller, more fundamental things. Around 460 BC, the Greek philosopher Democritus developed the idea of atoms. He asked what would happen if you broke a piece of something in half, and in half again, and so on: how many times would you have to break it before it could no longer be broken into a smaller piece? He called this small, indivisible piece the atom (ἄτομος, "uncuttable") (Freeman, 1948). Unfortunately, the philosophers of that period, particularly Aristotle, dismissed his ideas as worthless (Freeman, 1948). Subsequently, there was no further interest in the atom until 1803, when John Dalton proposed what he called his atomic theory.
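
Democritus' thought experiment can even be made concrete with modern numbers he could not have known. A minimal sketch, assuming an everyday 1 cm object and a present-day rough atomic diameter of 10⁻¹⁰ m (both figures are my illustrative assumptions), counts the halvings needed to reach atomic scale:

    import math

    object_size = 1e-2   # starting size: 1 cm (an arbitrary, illustrative choice)
    atom_size = 1e-10    # rough modern atomic diameter in meters (unknown to Democritus)

    halvings = 0
    size = object_size
    while size > atom_size:
        size /= 2
        halvings += 1

    print(f"{halvings} halvings reach atomic scale")                         # 27
    print(f"check: log2(size ratio) = {math.log2(object_size / atom_size):.1f}")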

Dalton concurred with Democritus' hypothesis of the immutability of the atom and added two further hypotheses: that atoms of different elements have different weights, which rejected Newton's theory of chemical affinities, and that three different types of atoms exist, which he labeled "simple," "compound," and "complex" (Greenaway, 1966). In his further work, he posited that atoms can be neither created nor destroyed, and that atoms combine only in small, whole-number ratios such as 1:1, 1:2, 2:3, and so on (Greenaway, 1966).

In 1897, Thomson discovered the electron and proposed a model for the structure of the atom. He posited that electrons are kept in position by electrostatic forces (Thomson, 1904). He suggested that these electrons were arranged as in a "plum pudding"; that is, each atom was a sphere filled with a positively charged fluid, the "pudding," in which the electrons, the "plums," were scattered. The radius of the model atom was 10⁻¹⁰ meters (Hentschel, 2009).

In 1900, Planck demonstrated that when atoms are made to vibrate strongly enough, their energy can be measured only in discrete units. He called these energy packets quanta (Mehra & Rechenberg, 1982). He derived his formula by a statistical analysis of these quanta of energy. Each quantum contained energy directly proportional to a constant, h, multiplied by the frequency of oscillation of the particular blackbody oscillator associated with that quantum. He expressed this in the formula

I(ν, T) = (2hν³ / c²) · 1 / (e^(hν/kT) − 1),

where

I = energy per unit time per unit surface area per unit solid angle per unit frequency or wavelength;

ν = frequency;

T = temperature of the black body;

h = Planck's constant (6.62606896(33) × 10⁻³⁴ J·s = 4.13566733(10) × 10⁻¹⁵ eV·s);

c = speed of light;

k = Boltzmann constant; the energy of each quantum is E = hν (Mohr, et al., 2006).

From this analysis, Planck calculated a value for the charge of the electron as well as for the constant h. Subsequently, he discovered that, because of the finite, non-zero value of h, the world at atomic dimensions could not be explained with classical mechanics (Mehra & Rechenberg, 1982). In 1905, Einstein applied the quantum hypothesis to light and was able to explain the photoelectric effect; that is, light absorption could release electrons from atoms. He argued that under certain circumstances light behaves not as continuous waves but as discontinuous, individual particles; that is, quanta (Cassidy, 1998).
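
To make the formula concrete, a minimal sketch follows; the 5800 K temperature (roughly the solar surface) and the 500 nm wavelength are my illustrative choices, not values from the sources above. It evaluates Planck's law and the energy E = hν of a single quantum:

    import math

    h = 6.62606896e-34   # Planck's constant, J*s (value cited above)
    c = 2.99792458e8     # speed of light, m/s
    k = 1.3806504e-23    # Boltzmann constant, J/K

    def planck_intensity(nu, T):
        """Spectral radiance I(nu, T): energy per unit time, area, solid angle, frequency."""
        return (2.0 * h * nu**3 / c**2) / (math.exp(h * nu / (k * T)) - 1.0)

    T = 5800.0            # illustrative temperature, roughly the solar surface
    nu = c / 500e-9       # frequency of 500 nm (green) light
    print(f"I(nu, T) = {planck_intensity(nu, T):.3e} W m^-2 Hz^-1 sr^-1")
    print(f"energy per quantum E = h*nu = {h * nu:.3e} J")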

In 1905, Einstein published his paper on special relativity, which generalized Galileo's principle of relativity. He termed it "special" because the theory applies only to frames of reference in uniform relative motion with respect to each other (Einstein, 2008). In this theory, he expressed two postulates:

The Principle of Relativity – The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform motion relative to each other (Einstein, 2008).

The Principle of Invariant Light Speed – "light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (Einstein, 2008). That is, light in a vacuum moves with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source (Einstein, 2008).

The outcomes of this theory are that the time lapse between two events is not invariant from one observer to another, but depends on the relative speeds of the observers' frames of reference (Einstein, 2008). Two events happening in two different locations that occur simultaneously in the reference frame of one observer may occur non-simultaneously in the reference frame of another observer (Einstein, 2008). The length of an object as measured by one observer may be smaller than that measured by another observer (Einstein, 2008). He posited that as an object's speed approaches the speed of light, an observer would see its mass appear to increase, making it appear more difficult to accelerate (Einstein, 2008). The energy content of an object at rest with mass m equals mc². Conservation of energy implies that in any reaction a decrease in the sum of the masses of the particles must be accompanied by an increase in the kinetic energies of the particles after the reaction; that is, E = mc² (Einstein, 2008).
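
These effects can be quantified with the Lorentz factor. A minimal sketch, using an illustrative speed of 0.8c and a 1-gram mass (my choices, not Einstein's examples):

    import math

    c = 2.99792458e8      # speed of light, m/s

    def gamma(v):
        """Lorentz factor relating the two observers' measurements."""
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    v = 0.8 * c                              # illustrative speed: 80% of c
    g = gamma(v)                             # exactly 5/3 at 0.8c
    print(f"gamma = {g:.4f}")
    print(f"1 s of the traveler's time spans {g:.4f} s for the other observer")
    print(f"a 1 m rod is measured as {1.0 / g:.4f} m")   # length contraction
    m = 1.0e-3                               # 1 gram of matter, in kg
    print(f"rest energy E = mc^2 = {m * c**2:.3e} J")    # ~9e13 J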

In 1909, Rutherford conducted an experiment in which he fired helium nuclei, alpha (α) particles, at gold foil. Most of the α-particles went straight through, but a few were deflected or bounced back. This led Rutherford to hypothesize that atoms are mostly empty space. He posited that the negative electrons orbited the nucleus of the atom much as planets orbit the sun (Goldstein, et al., 2000). In 1919, Rutherford succeeded in demonstrating the artificial disintegration of a nucleus by firing α-particles into nitrogen gas, which resulted in the production of hydrogen (Reeves, 2008).
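
A rough calculation shows why the back-scattered α-particles implied a tiny, dense nucleus. Assuming a ~5 MeV α-particle (typical of the radium sources of the era; the figure is my assumption), setting its kinetic energy equal to the Coulomb potential energy gives the distance of closest approach to a gold nucleus:

    ke = 8.9875517e9           # Coulomb constant, N*m^2/C^2
    e = 1.602176e-19           # elementary charge, C
    Z_alpha, Z_gold = 2, 79    # charge numbers of the alpha particle and gold nucleus
    E_kin = 5.0e6 * e          # assumed 5 MeV kinetic energy, converted to joules

    # At closest approach, all kinetic energy has become Coulomb potential energy:
    d = ke * Z_alpha * Z_gold * e**2 / E_kin
    print(f"closest approach: {d:.2e} m")   # ~4.5e-14 m, ~10,000x smaller than the atom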

In 1913, Bohr postulated that electrons can be bumped up to a higher shell if hit by an electron or a photon of light. Classical physics held that the electrons orbiting the nucleus should lose energy until they spiral down into the center, collapsing the atom. Bohr proposed adding to the model the new idea of quanta put forth by Planck: electrons exist only at set levels of energy; that is, at fixed distances from the nucleus. If the atom absorbs energy, the electron jumps to a level further from the nucleus; if it radiates energy, it falls to a level closer to the nucleus (Smirnov, 2003). Sommerfeld hypothesized that the orbits of electrons need not be circular but can also be elliptical. He further posited that the orbits need not lie in the same plane: they could be oriented in space along certain defined directions (Eisberg & Resnick, 1985).
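
Bohr's picture of jumps between fixed levels can be illustrated numerically. A minimal sketch, using the standard hydrogen-level formula of the Bohr model, E_n = −13.6 eV / n² (a textbook result, not quoted from the sources above):

    E1 = -13.6        # hydrogen ground-state energy, eV
    hc = 1239.84      # h*c in eV*nm, convenient for photon wavelengths

    def level(n):
        """Energy of the n-th Bohr level of hydrogen, in eV."""
        return E1 / n**2

    # An electron falling from the third level to the second radiates the difference:
    photon = level(3) - level(2)
    print(f"E3 = {level(3):.3f} eV, E2 = {level(2):.3f} eV")
    print(f"emitted photon: {photon:.3f} eV, wavelength {hc / photon:.0f} nm")  # ~656 nm, red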

In 1915, Einstein developed his general theory of relativity, which addressed the issue of gravity. It described the relationship between space-time and energy-momentum. Space-time may be defined as three-dimensional space, with the addition of time as a fourth dimension, combined in a single continuum. In this theory, Einstein assumed that space-time is curved by the presence of energy (Einstein, 2008).

Pauli, in 1925, developed his exclusion principle (Griffiths, 2004), which states that "no two electrons in the same atom can be in the same quantum state" (Schäfer, 1997). The importance of this principle is that it accounts for the electron-shell structure that distinguishes the different elements on the Periodic Table.
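
A minimal sketch of the principle's bookkeeping: shell n offers n² orbitals, and each orbital holds at most two electrons of opposite spin, so shell capacities run 2, 8, 18, 32, the pattern underlying the Periodic Table's rows:

    # Electron capacity of shell n under the exclusion principle: 2 * n^2.
    for n in range(1, 5):
        print(f"shell n={n}: up to {2 * n * n} electrons")   # 2, 8, 18, 32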

In 1926, Schrödinger theorized the concept of wave dynamics, developing a particle-wave theory (Frederic & Levi, 2006). The wave function itself is not physical, because it cannot be measured. In this theory, what is measured is the expected value of a quantum operator, which rests on a probabilistic interpretation (Frederic & Levi, 2006). His equation is used to describe an electron's movement through space.
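
As an illustration of measuring only expectation values, the sketch below uses the textbook particle-in-a-box ground state (my choice of example, not Schrödinger's original system) and computes the expected position from the probability density |ψ|²:

    import numpy as np

    L = 1.0                                          # box length, arbitrary units
    x = np.linspace(0.0, L, 10001)
    dx = x[1] - x[0]
    psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)   # ground-state wave function

    prob = psi**2                      # Born rule: |psi|^2 is the probability density
    print(f"normalization = {np.sum(prob) * dx:.4f}")    # ~1.0
    print(f"<x> = {np.sum(x * prob) * dx:.4f}")          # L/2: centered on average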

In that same year, Born and Heisenberg developed a theory they called "matrix mechanics" to explain the nature of atoms. Up to this time, quantum theory had described the motion of a particle by a classical orbit, with a well-defined position and momentum, subject to the restriction that the time integral over one period of the momentum times the velocity must be a positive integer multiple of Planck's constant (Born, et al., 1989). By applying matrix mathematics, the position, the momentum, the energy, and all other observable quantities are interpreted as matrices. This was developed on the premise that every observable physical quantity can be represented by a matrix whose elements are indexed by pairs of energy levels. When one of these quantities is measured, the result is one of the matrix's characteristic values, with the corresponding vector being the state of the system immediately after the measurement (Born, et al., 1989).
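
A small numerical sketch shows the idea for the harmonic oscillator, in units where ħ = m = ω = 1; truncating the infinite matrices to 8×8 is my simplification, and it spoils only the entries at the truncation edge:

    import numpy as np

    N = 8                                        # truncation size for the infinite matrices
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator in the energy basis
    adag = a.T                                   # raising operator

    x = (a + adag) / np.sqrt(2.0)                # position as a matrix (hbar = m = omega = 1)
    p = 1j * (adag - a) / np.sqrt(2.0)           # momentum as a matrix

    comm = x @ p - p @ x                         # the canonical commutator [x, p]
    print(np.round(comm[:4, :4], 10))            # i on the diagonal, i.e. i*hbar

    H = (p @ p + x @ x) / 2.0                    # oscillator energy, also a matrix
    print(np.round(np.diag(H).real, 6))          # 0.5, 1.5, 2.5, ... (last entry is a truncation artifact)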

In 1927, Heisenberg went further, positing that no experiment can measure the position and momentum of a quantum particle simultaneously. The more precisely one of the quantities is measured, the less precisely the other can be measured. This became known as the "Heisenberg uncertainty principle" (Born, et al., 1989).
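
A numerical check, using a Gaussian wave packet (the case that exactly saturates the bound; the packet width is an arbitrary choice of mine), recovers Δx·Δp = ħ/2:

    import numpy as np

    hbar = 1.0
    x = np.linspace(-40.0, 40.0, 2**14)
    dx_grid = x[1] - x[0]
    sigma = 0.7                                    # arbitrary packet width
    psi = np.exp(-x**2 / (4.0 * sigma**2))         # Gaussian wave packet (unnormalized)

    prob_x = np.abs(psi)**2
    prob_x /= prob_x.sum() * dx_grid
    dx = np.sqrt(np.sum(x**2 * prob_x) * dx_grid)  # position spread

    p = hbar * 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx_grid)   # momentum grid
    prob_p = np.abs(np.fft.fft(psi))**2                          # momentum-space density
    dp_grid = p[1] - p[0]
    prob_p /= prob_p.sum() * dp_grid
    dp = np.sqrt(np.sum(p**2 * prob_p) * dp_grid)  # momentum spread

    print(f"dx * dp = {dx * dp:.4f}  (bound: hbar/2 = {hbar / 2})")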

In 1932, Heisenberg posited that charged particles exchange photons of light, bouncing them back and forth between one another, thus providing a way for electromagnetic forces to act between the particles (Smirnov, 2003). In 1935, Yukawa used Heisenberg's uncertainty principle to explain how a virtual particle could exist for an extremely small fraction of a second (Brown & Jackson, 1976).

In 1938, Hahn and Strassmann first observed nuclear fission, a reaction in which the nucleus of an atom splits into smaller parts. In their classic experiment, they discovered barium upon bombarding uranium with neutrons (Hahn & Strassman, 1939). This was seen as a momentous discovery, with practical implications. Szilárd had foreseen the potential of a nuclear chain reaction as early as 1933 (Esterer & Esterer, 1972). Chain reactions were already an understood concept in chemistry, and Szilárd envisioned a similar process in physics using neutrons, which he chose because they lack an electrostatic charge. He attempted to create a chain reaction using beryllium and indium, but was unsuccessful (Esterer & Esterer, 1972). With news of the fission discovery spreading throughout the scientific community, Szilárd collaborated with Fermi to develop the concept of the nuclear reactor, with uranium as fuel. Fermi had earlier demonstrated that neutrons are captured far more effectively by atoms when they are of low energy because, on quantum theory, slow neutrons make the atoms appear to be larger targets (Segrè, 1970). To slow down the secondary neutrons released by fissioning uranium nuclei, they proposed a graphite moderator, against which the fast, high-energy neutrons would collide and thereby slow down. With enough raw material, their reactor could theoretically sustain a slow-neutron chain reaction, producing heat and radioactive byproducts.

In 1939, Hahn, Strassmann, Meitner, and Frisch completed the first successful demonstration of nuclear fission (Smirnov, 2003). It was not until 1942 that the first nuclear reactor, named Chicago Pile-1, was built and the first chain reaction entirely controlled by man was accomplished (Fermi, 1946). Withdrawing the cadmium-coated rods, which absorb neutrons, increased neutron activity and led to a self-sustaining chain reaction; re-inserting the rods damped the reaction.
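
The role of the control rods can be caricatured with a toy multiplication model (my illustration, not actual reactor neutronics): if each fission's neutrons cause on average k further fissions, the population grows for k > 1, holds steady at k = 1, and dies out for k < 1; inserting rods amounts to lowering k.

    def neutrons(k, generations, n0=1000):
        """Toy model: neutron population after a number of fission generations."""
        n = float(n0)
        for _ in range(generations):
            n *= k           # each generation multiplies the population by k
        return n

    for k, label in [(1.05, "rods withdrawn, supercritical"),
                     (1.00, "critical, self-sustaining"),
                     (0.95, "rods inserted, subcritical")]:
        print(f"k = {k:.2f} ({label}): {neutrons(k, 10):.0f} neutrons after 10 generations")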

Szilárd was instrumental in initiating the Manhattan Project in 1939, whose purpose was to develop the first atomic weapon (Groves, 1962). It was not until 1945 that the first weapons were successfully developed. There were two types: the bomb used in the Hiroshima bombing was made of uranium-235 (U-235); the other, used in the Nagasaki bombing, was a plutonium bomb. The uranium bomb was a gun-type fission weapon: a mass of U-235 was fired down a gun barrel into another U-235 mass, rapidly creating a critical mass and resulting in an explosion (Groves, 1962). The plutonium bomb operated by implosion: a sub-critical sphere of fissile material was compressed into a smaller, denser form; when fissile atoms are packed closer together, the rate of neutron capture increases and the mass becomes critical (Groves, 1962).

In 1948, the first transistor was developed (Bodanis, 2005). Its significance to science, and particularly to the electronic-engineering community, was as a miniaturized, low-cost device whose output power is greater than its input power (Bodanis, 2005). The transistor may work either as a switch or as an amplifier and is used in many electronic applications applied to atomic-physics devices.

In 1952, the first nuclear fusion weapon was developed. Fusion is the process whereby two nuclei are joined together to form a single, heavier nucleus, usually accompanied by the absorption or release of energy (Atzeni & Meyer-ter-Vehn, 2004). To detonate the weapon, a small fission device is set off; the gamma and X-rays it emits first compress the fusion fuel, then heat it to a high temperature. The fusion reaction creates large numbers of high-speed neutrons, which can induce fission in materials not normally susceptible to it. By grouping together numerous stages with increasing amounts of fusion fuel, weapons of almost arbitrary yield may be created (Atzeni & Meyer-ter-Vehn, 2004).
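
The energy released by fusion follows directly from the mass defect and E = mc². A sketch using the standard deuterium-tritium reaction, D + T → He-4 + n (my illustrative choice; the paragraph above does not name the fuel), with masses in atomic mass units:

    u_to_MeV = 931.494                 # energy equivalent of one atomic mass unit, MeV
    m_D, m_T = 2.014102, 3.016049      # deuterium and tritium masses, u
    m_He4, m_n = 4.002603, 1.008665    # helium-4 and neutron masses, u

    mass_defect = (m_D + m_T) - (m_He4 + m_n)   # mass that disappears in the reaction
    print(f"mass defect: {mass_defect:.6f} u")
    print(f"energy released: {mass_defect * u_to_MeV:.1f} MeV per reaction")   # ~17.6 MeV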

The many high-energy accelerators developed after World War II produced numerous sub-atomic particles that challenged physicists to explain their existence and behavior. This was significant because previous theories had either to be modified or discarded. A particular example was the concept of parity. Parity conservation in quantum mechanics means that two physical systems, one of which is a mirror image of the other, must behave in identical fashion. In 1956, Lee and Yang disproved this assumption, showing that parity conservation does not always hold (Lee & Yang, 1957). The implications for future research were enormous.

Also in 1956, Reines and Cowan confirmed the existence of neutrino interactions, as proposed by Pauli in 1930. Pauli had postulated the neutrino to explain why electrons in beta decay did not carry off the full energy of the nuclear transition. The neutrino has no charge and almost no mass, yet can penetrate massively thick materials without any interaction (Franklin, 2003).

In 1960, Maiman developed the first functioning laser (Yariv, 1989). The gain medium of a laser is a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission. The gain medium absorbs energy, which raises some electrons into higher-energy quantum states, and their stimulated return to lower states produces the laser beam (Yariv, 1989). The utility of this development was far-reaching: today, lasers are used routinely in medicine, manufacturing, research, and the military.
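
In the small-signal regime, stimulated emission amplifies intensity exponentially with path length through the gain medium, I(L) = I₀·e^(gL); the gain coefficient and length in the sketch below are arbitrary illustrative values, not parameters of Maiman's device:

    import math

    g = 50.0     # small-signal gain coefficient, 1/m (illustrative value)
    L = 0.05     # gain-medium length, m (5 cm)
    I0 = 1.0     # input intensity, arbitrary units

    I_out = I0 * math.exp(g * L)                    # exponential single-pass amplification
    print(f"single-pass gain: x{I_out / I0:.1f}")   # e^2.5 ~ 12.2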

During the next few years, physicists began to realize that existing theories failed to adequately explain the nature and behavior of newly discovered sub-atomic particles. In an attempt to explain these new phenomena, Gell-Mann and Zweig independently developed the "quark" model.

After chronologically tracing the origin of the concept of the atom, how it became defined, and the development of atomic theory from earliest history through the middle of the 20th century, it is now appropriate to account for the basis of its on-going utility. As demonstrated throughout the foregoing chronology, applying a heuristic approach allowed for the refinement and improvement of existing theories, and even the discarding of those that were no longer tenable. The molecular theory of matter starts with quantum and statistical mechanics. That is not to say that research has stopped. On the contrary, research findings are published daily that agree with or refute prior thinking and develop more advanced concepts. For example, in 1927 Heisenberg theorized that no experiment could measure the position and momentum of a quantum particle simultaneously. In 1999, Boyd disputed this principle, stating:

[I]f I am observing coherent monochromatic light, at any point in time along the line A-B, I can predict each and every one of these factors: Wavelength, Frequency, Phase, Momentum, and Position of the photons, with complete certainty. The only limitation to accuracy of position is the time of emission accuracy that is related to the accuracy of the timer. NASA has developed a timer system accurate to 10 femtoseconds, with projections of improvements into the .001 femtosecond regime. Emission time is then not an issue over a premeasured course and thus location of the photon is known to within the accuracy limits of the timer. We can know momentum with certainty because the momentum of a photon is directly related to its frequency. If you know the frequency, you know the momentum. The other parameters mentioned above follow along similar lines. Thus, contrary to Heisenberg, I can know both the momentum and the position of the photon simultaneously, with absolute accuracy (Harris, 1999).

Again, an example of heuristics in action.

But how does this address present-day utility? With the advent of fusion almost half a century ago, new procedures were developed for atomic interactions. In 2008, a tabletop-sized particle accelerator was developed to produce nuclear fusion, with practical applications in detecting explosives and scanning luggage at airports, and as an important tool for a wide range of laboratory experiments (Saglime, 2008).

The discovery, description, and behavioral characterization of sub-atomic particles have evolved, in turn, into the science of nanotechnology. One nanometer (nm) is 10⁻⁹ meter. Nanostructures sit at the borderline between the smallest human-made devices and the largest molecules of living systems. Our ability to control and manipulate nanostructures will make it possible to exploit new physical, biological, and chemical properties of systems that are intermediate in size between single atoms or molecules and bulk materials (Mansoon, 2005).

The laser could not have been developed without understanding and applying the underlying principles of atomic physics. Its development has resulted in numerous beneficial medical and industrial applications, as well as those related to weapons of mass destruction.

Application of the quark model, in the nonrelativistic approximation, to baryon and charmonium decay yields accurate and detailed data, allowing transitions between states to be predicted accurately (Yaouanc, 1988). Further application of the quark model has led to the development of a model of hybrid stars (Carroll, 2009).

In 2006, a practical application of neutrino beams was found (Winter, 2006), specifically in the prediction of earthquakes, as foreseen by Tsarev in 1985 (Tsarev, 1985). And the list of accomplishments and advances goes on.

In conclusion, this paper has traced the development of the concept of the atom chronologically through the development of atomic theory. Has the development of atomic theory helped society and civilization? The answer is a resounding yes. The application of a heuristic approach has allowed scientists to revise, and sometimes discard, outdated theories in favor of new ones.
