[To an ASTR 121 student, Jan. 1999] ...On the interpretation of science & how it is practiced. Yes, this was a quick discussion without any nuances. By using the word "pure" I don't mean to imply ethically pure or isolated from societal pressures like careers, incomes, and status. What I meant was simply: "divorced from possible applications." Astronomy is an excellent example of "pure" science by that definition, since there is almost no expectation that research in astronomy will produce technical applications in the short run.

The people involved in science are, of course, subject to all the same pressures, foibles, and character flaws as most people. They make lots of mistakes, exercise bad judgement, and have strongly subjective biases (mostly in favor of their own ideas). However, the difference in terms of the results of science is that there is an EXTERNAL STANDARD for determining which ideas have merit: comparison with empirical evidence, and anyone is entitled to make those comparisons and draw their own conclusions. For every idea or interpretation which is advanced, there are always several countervailing ideas (the number increases with the importance of the issue being investigated). A number of people will have stakes in the outcome, but different ones. The result is a continuing series of "shootouts," mostly in the written literature but sometimes at conferences, between the different ideas. Over time, the idea which best explains the data usually emerges clearly. The whole process encourages scientists to be as self-critical as possible, before others can parade their mistakes in public. This is the "self-correcting error mechanism" I talked about. Young scientists are particularly eager to find errors in their elders' work, chinks in the armor, because that's how they make names for themselves. Established ideas are therefore tested continuously, and important scientific ideas which have stood the test of time are really pretty robust. Yes, they may be found faulty on the basis of new evidence in the future, but they are consistent with what we know now.

A good deal of federal funding for "basic" research is actually driven largely by curiosity, but this is on the very practical grounds that learning about the basic mechanisms of nature ultimately winds up producing useful technologies. That was a key lesson the government learned in World War II, and it is still the basis of many of the funding programs at NSF, NASA, NIH, and DOE. Thoughtful people know that research in quantum physics or nucleic acid structures will not produce applications this year, or possibly for 50 years, or possibly ever. But without the basic research, one cannot move beyond the current generation of technologies. If you focus only on obvious opportunities for applications, you will be stuck with the obvious. A good example of basic research producing (immediate) applications was the discovery of X-rays. Another was the discovery of the structure of DNA. That was made in 1953, but only in the last few years has it begun to result in applications---and those will completely transform medicine within 25 years.

I don't think you are expressing it, but there is a rather fashionable extreme school of thought among philosophers of science which concludes that there is no objectivity whatsoever in the results of science---that all is subjective and socially/politically constructed.
According to this view, there is no more substance to the principles of science than to the 1988 Republican National Platform. As I suggested in my lecture, the easiest answer to this criticism is that the applications derived from science overwhelmingly demonstrate that science works. If the lights come on, the airplane flies, the antibiotic cures, then this "socially constructed" argument goes down the drain. I've often wondered what goes through the heads of these philosophers of science when they discover that their toasters actually produce toast.

Thanks for your very pertinent questions --- Bob O'Connell