First coder

Konrad Zuse

Konrad Zuse was a German civil engineer, pioneering computer scientist, inventor and businessman. His greatest achievement was the world’s first programmable computer: the functional program-controlled Turing-complete Z3 became operational in May 1941.

Born: 22 June 1910, Berlin, Germany

Died: 18 December 1995, Hünfeld, Germany

Spouse: Gisela Brandes (m. 1945–1995)

Children: Horst Zuse, Klaus Peter Zuse, Monika Zuse Gruden, Friedrich Zuse

Awards: Werner von Siemens Ring, Wilhelm Exner Medal, Harry H. Goode Memorial Award

Konrad Zuse, the creator of the first relay computer

Konrad Zuse was born on 22 June 1910, in Berlin (Wilmersdorf), the capital of Germany, into the family of a Prussian postal officer, Emil Wilhelm Albert Zuse (26.04.1873–14.05.1946), and Maria Crohn Zuse (10.01.1882–02.07.1957). Konrad had a sister, Lieselotte (1908–1953), two years his elder.

A drawing by Konrad Zuse

In 1912, the Zuse family moved to Braunsberg, a sleepy small town in East Prussia, where Emil Zuse was appointed a postal clerk. From his early childhood Konrad demonstrated a huge talent, not in mathematics or engineering, but in painting (look at the fabulous chalk drawing nearby, made by Zuse in his school days).

Konrad went to school very young, enrolling at the humanistic Gymnasium Hosianum in Braunsberg. After his family moved to Hoyerswerda (a town in the German Bundesland of Saxony), he passed his Abitur (the final exams German students take at the end of their secondary education) at the Reform-Real-Gymnasium there. After graduation the young Konrad fell into a state of uncertainty about what to study next: engineering or painting. Fritz Lang’s 1927 film Metropolis impressed Konrad greatly. He dreamed of designing and building a giant, impressive futuristic city like Metropolis and even started to draw some projects. So he finally decided to study civil engineering at the Technical College (Technische Hochschule) in Berlin-Charlottenburg.

During his studies he also worked as a bricklayer and bridge builder. In those years traffic lights were introduced in Berlin, causing total chaos in the traffic. Zuse was one of the first people who tried to design something like a “green wave”, though unsuccessfully. He was also very interested in photography and designed an automated system for developing band negatives, using punch cards for control purposes. Later on he devised a special system for film projection, the so-called Elliptisches Kino (elliptical cinema).

The next major project of the young dreamer was the conquest of space. He dreamed of building bases on the moons of the outer planets of the Solar System. At these bases a fleet of rockets would be built, each carrying one or two hundred passengers and capable of flying at one-thousandth the speed of light, so as to reach the nearest fixed star in a thousand years.

The future city Metropolis, the automatic photo lab, the elliptical cinema, the space project: all these were only a small part of the technical ideas that prepared the invention of the computer. After graduating from the Technische Hochschule in 1935, he started as a design engineer at the Henschel Flugzeugwerke (Henschel aircraft factory) in Berlin-Schönefeld, but resigned a year later, deciding to devote himself entirely to the construction of a computer. From 1935 until 1964 Zuse was almost entirely devoted to developing the world’s first relay computer, the world’s first workable programmable computer (see computers of Zuse), the world’s first high-level programming language, and more.

In January 1945 Konrad Zuse married one of his employees, Gisela Ruth Brandes. On 17 November of the same year their first son, Horst, was born; he would follow his eminent father, earning a diploma degree in electrical engineering and a Ph.D. in computer science. Later came Monika (1947–1988), Ernst Friedrich (1950–1979), Hannelore Birgit (1957) and Klaus-Peter (1961).

After 1964, the Zuse KG was no longer owned and controlled by Konrad Zuse. It was a heavy blow for Zuse to lose his company, but the outstanding debts were too high. In 1967 he received another blow: the German patent court rejected his patent applications, and Zuse lost his 26-year fight over the invention of the Z3 with all its new features (see Zuse’s first patent application from 1941).

An oil painting by Konrad Zuse (1979) (Source: http://www.epemag.com/zuse)

But in the 1960s the retired Zuse was still a man full of energy and ideas. He started to write an autobiography (published in 1970), made many beautiful oil paintings (see the image above), reconstructed his first computer (the Z1), and more. In 1965 he was given the Werner von Siemens Ring, the most prestigious technical award in Germany. In the same year Zuse received the Harry Goode Memorial Award, together with George Stibitz, in Las Vegas.

In 1969 Zuse published Rechnender Raum, the first book on digital physics. He proposed that the universe is being computed by some sort of cellular automaton or other discrete computing machinery, challenging the long-held view that some physical laws are continuous by nature. He focused on cellular automata as a possible substrate of the computation, and pointed out that the classical notions of entropy and its growth do not make sense in deterministically computed universes.
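To make the idea concrete, here is a minimal sketch of an elementary one-dimensional cellular automaton, the kind of discrete machinery Zuse proposed as a substrate of computation. The code is illustrative only; the rule number, grid width and wrap-around boundary are arbitrary choices of ours, not anything from Rechnender Raum:

```python
# Minimal one-dimensional cellular automaton, in the spirit of the
# "computing universe" of Rechnender Raum. Rule number, grid width
# and the wrap-around boundary are illustrative choices.

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule for one generation."""
    n = len(cells)
    nxt = []
    for i in range(n):
        # Each cell's next state depends only on its local neighborhood.
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        nxt.append((rule >> index) & 1)
    return nxt

cells = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Simple local rules like this one are already capable of universal computation, which is what makes the proposal more than a metaphor.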

In 1992 Zuse started his last project, the Helix-Tower (see the image below): a variable-height tower for catching wind in order to produce energy more easily, built from uniformly shaped, repeatable elements. The propeller and wind generator were to be mounted on the top of the tower. Zuse used a very elegant mechanical construction and received a patent for it in 1993. The height of the tower could be modified by adding or removing building blocks.

Konrad Zuse with the project of his Helix-Tower (Source: http://www.epemag.com/zuse)

Konrad Zuse must be credited (alone or with other inventors) with the following pioneering achievements in computer science:
  1. The use of the binary number system for numbers and circuits.
  2. The use of floating point numbers, along with the algorithms for the translation between binary and decimal and vice versa.
  3. The carry look-ahead circuit for the addition operation (a minimal sketch follows this list) and program look-ahead (the program is read two instructions in advance, and it is tested to see whether memory instructions can be performed ahead of time).
  4. The world’s first complete high-level language (Plankalkül).
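As a hedged illustration of item 3, the sketch below shows the arithmetic behind carry look-ahead in Python (our notation; Zuse worked with relay circuits, not code). Each bit position produces a generate signal g and a propagate signal p, and every carry follows from the recurrence c[i+1] = g[i] OR (p[i] AND c[i]); hardware expands these recurrences so all carries are formed in parallel, while the sketch simply evaluates the same equations:

```python
# Sketch of addition using generate/propagate signals, the core idea
# of carry look-ahead (illustrative; not Zuse's actual relay design).

def carry_lookahead_add(a_bits, b_bits, c_in=0):
    """Add two equal-length little-endian bit lists; return (sum_bits, carry_out)."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # carry generated at this bit
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # carry propagated through it
    carries = [c_in]
    for i in range(len(a_bits)):
        # c[i+1] = g[i] | (p[i] & c[i]); hardware unrolls this per bit.
        carries.append(g[i] | (p[i] & carries[i]))
    sum_bits = [p[i] ^ carries[i] for i in range(len(a_bits))]
    return sum_bits, carries[-1]

# 6 + 7 = 13, little-endian: 6 -> [0,1,1,0], 7 -> [1,1,1,0], 13 -> [1,0,1,1]
print(carry_lookahead_add([0, 1, 1, 0], [1, 1, 1, 0]))  # ([1, 0, 1, 1], 0)
```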

This remarkable man, Konrad Zuse, died of a heart attack on 18 December 1995, in Hünfeld, Germany.

Father of modern coding

Dennis Ritchie

Dennis MacAlistair Ritchie was an American computer scientist. He created the C programming language and, with long-time colleague Ken Thompson, the Unix operating system and B programming language.

Born: 9 September 1941, Bronxville, New York, United States

Died: 12 October 2011, Berkeley Heights, New Jersey, United States

Nationality: American

Books: The C Programming Language (2nd Edition)

Awards: Turing Award, National Medal of Technology and Innovation

Education: Harvard John A. Paulson School of Engineering and Applied Sciences

Adam Osborne

Adam Osborne was a British-American author, book and software publisher, and computer designer who founded several companies in the United States and elsewhere. He introduced the Osborne 1, the first commercially successful portable computer. 

Born: 6 March 1939, Bangkok, Thailand

Died: 18 March 2003, Kodaikanal, India

Nationality: American

Parents: Arthur Osborne

Books: An Introduction to Microcomputers

Michael Faraday

Michael Faraday FRS was an English scientist who contributed to the study of electromagnetism and electrochemistry. His main discoveries include the principles underlying electromagnetic induction, diamagnetism and electrolysis. 

Born: 22 September 1791, Newington Butts, London, United Kingdom

Died: 25 August 1867, Hampton Court Palace, Molesey, United Kingdom

Nationality: British

Known for: Faraday’s law of induction, Electrochemistry, Faraday effect, Faraday cage

Awards: Royal Society Bakerian Medal, Copley Medal, Royal Medal, Rumford Medal

John McCarthy

John McCarthy (September 4, 1927 – October 24, 2011) was an American computer scientist and cognitive scientist. McCarthy was one of the founders of the discipline of artificial intelligence.[1] He coined the term “artificial intelligence” (AI),[2] developed the Lisp programming language family, significantly influenced the design of the ALGOL programming language, popularized time-sharing, invented garbage collection, and was very influential in the early development of AI.

Born: September 4, 1927, Boston, Massachusetts, U.S.

Died: October 24, 2011 (aged 84), Stanford, California, U.S.

Alma mater: Princeton University, California Institute of Technology

Known for: Artificial intelligence, Lisp, circumscription, situation calculus

Awards: Turing Award (1971), Computer Pioneer Award (1985), IJCAI Award for Research Excellence (1985), Kyoto Prize (1988), National Medal of Science (1990), Benjamin Franklin Medal (2003)

Fields: Computer science

Institutions: Stanford University, Massachusetts Institute of Technology, Dartmouth College, Princeton University

Doctoral advisor: Solomon Lefschetz

Doctoral students: Ruzena Bajcsy, Ramanathan V. Guha, Barbara Liskov, Raj Reddy
McCarthy spent most of his career at Stanford University.[3] He received many accolades and honors, such as the 1971 Turing Award for his contributions to the field of AI,[4] the United States National Medal of Science, and the Kyoto Prize.

Early life and education

John McCarthy was born in Boston, Massachusetts, on September 4, 1927, to an Irish immigrant father and a Lithuanian Jewish immigrant mother, John Patrick and Ida (Glatt) McCarthy. The family was obliged to relocate frequently during the Great Depression, until McCarthy’s father found work as an organizer for the Amalgamated Clothing Workers in Los Angeles, California. His father came from the fishing village of Cromane in County Kerry, Ireland.[6] His mother died in 1957.[7]

McCarthy was exceptionally intelligent, and graduated from Belmont High School two years early.[8] McCarthy was accepted into Caltech in 1944.

McCarthy showed an early aptitude for mathematics; during his teens he taught himself college mathematics by studying the textbooks used at the nearby California Institute of Technology (Caltech). As a result, he was able to skip the first two years of mathematics at Caltech.[9] McCarthy was suspended from Caltech for failure to attend physical education courses.[10] He then served in the US Army and was readmitted, receiving a B.S. in mathematics in 1948.[11]

It was at Caltech that he attended a lecture by John von Neumann that inspired his future endeavors.

McCarthy initially completed graduate studies at Caltech before moving to Princeton University. He received a Ph.D. in mathematics from Princeton in 1951 after completing a doctoral dissertation, titled “Projection operators and partial differential equations”, under the supervision of Donald C. Spencer.

Academic career

After short-term appointments at Princeton and Stanford University, McCarthy became an assistant professor at Dartmouth in 1955.

A year later, McCarthy moved to MIT as a research fellow in the autumn of 1956.

In 1962, McCarthy became a full professor at Stanford, where he remained until his retirement in 2000. By the end of his early days at MIT he was already affectionately referred to as “Uncle John” by his students.[13]

McCarthy championed mathematical logic for artificial intelligence.

Contributions in computer science

John McCarthy is one of the “founding fathers” of artificial intelligence, together with Alan Turing, Marvin Minsky, Allen Newell, and Herbert A. Simon. McCarthy coined the term “artificial intelligence” in 1955, and organized the famous Dartmouth conference in the summer of 1956. This conference started AI as a field.[8][14] (Minsky later joined McCarthy at MIT in 1959.)

In 1958, he proposed the advice taker, which inspired later work on question-answering and logic programming.

McCarthy invented Lisp in the late 1950s. Based on the lambda calculus, Lisp soon became the programming language of choice for AI applications after its publication in 1960.[15]
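To give a flavor of why the lambda-calculus foundation mattered, here is a tiny s-expression evaluator written in Python (an illustrative sketch in the spirit of McCarthy’s original eval, not his actual code; nested Python lists stand in for s-expressions):

```python
# A tiny Lisp-style evaluator (illustrative sketch; McCarthy's 1960
# eval was written in Lisp itself, over cons cells rather than lists).

def evaluate(expr, env):
    if isinstance(expr, str):              # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):         # a literal, e.g. a number
        return expr
    op, *args = expr
    if op == "quote":                      # (quote x) -> x, unevaluated
        return args[0]
    if op == "if":                         # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                     # (lambda (params) body) -> closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                 # otherwise: apply function to args
    return fn(*[evaluate(a, env) for a in args])

env = {"*": lambda a, b: a * b}
# ((lambda (x) (* x x)) 7) evaluates to 49
print(evaluate([["lambda", ["x"], ["*", "x", "x"]], 7], env))
```

The striking point, then as now, is how little machinery is needed once programs and data share one representation.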

In 1958, McCarthy served on an ACM Ad hoc Committee on Languages that became part of the committee that designed ALGOL 60. In August 1959 he proposed the use of recursion and conditional expressions, which became part of ALGOL.[16] He was a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which supports and maintains ALGOL 60 and ALGOL 68.[17]
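For instance (our example, not McCarthy’s), a conditional expression lets a recursive definition be written as a single formula rather than a sequence of statements, which is exactly the combination he proposed:

```python
# Recursion through a conditional expression (illustrative example).
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```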

Around 1959, he invented so-called “garbage collection” methods to solve problems in Lisp.[18][19]
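The essence of the technique is mark-and-sweep: trace everything reachable from the program’s roots, then reclaim the rest. Here is a minimal Python sketch (illustrative; the original Lisp collector worked over cons cells in a fixed array):

```python
# Minimal mark-and-sweep sketch (illustrative, not McCarthy's code).
heap = {                # object id -> ids of the objects it references
    1: [2, 3],
    2: [],
    3: [4],
    4: [],
    5: [6],             # 5 and 6 are unreachable garbage
    6: [],
}
roots = [1]

def mark(obj_id, marked):
    """Mark obj_id and everything reachable from it."""
    if obj_id in marked:
        return
    marked.add(obj_id)
    for ref in heap[obj_id]:
        mark(ref, marked)

def sweep(marked):
    """Reclaim every heap object that was never marked."""
    for obj_id in list(heap):
        if obj_id not in marked:
            del heap[obj_id]

marked = set()
for r in roots:
    mark(r, marked)
sweep(marked)
print(sorted(heap))     # [1, 2, 3, 4] -- objects 5 and 6 were collected
```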

He helped to motivate the creation of Project MAC at MIT when he worked there, and at Stanford University, he helped establish the Stanford AI Laboratory, for many years a friendly rival to Project MAC.

McCarthy was instrumental in the creation of three of the very earliest time-sharing systems (the Compatible Time-Sharing System, the BBN Time-Sharing System, and the Dartmouth Time Sharing System). His colleague Lester Earnest told the Los Angeles Times: “The Internet would not have happened nearly as soon as it did except for the fact that John initiated the development of time-sharing systems. We keep inventing new names for time-sharing. It came to be called servers … Now we call it cloud computing. That is still just time-sharing. John started it.”[8]

In 1961, he was perhaps the first to suggest publicly the idea of utility computing, in a speech given to celebrate MIT’s centennial: that computer time-sharing technology might result in a future in which computing power and even specific applications could be sold through the utility business model (like water or electricity).[20] This idea of a computer or information utility was very popular during the late 1960s, but had faded by the mid-1990s. However, since 2000, the idea has resurfaced in new forms (see application service provider, grid computing, and cloud computing).

In 1966, McCarthy and his team at Stanford wrote a computer program used to play a series of chess games with counterparts in the Soviet Union; McCarthy’s team lost two games and drew two games (see Kotok-McCarthy).

From 1978 to 1986, McCarthy developed the circumscription method of non-monotonic reasoning.
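A toy example conveys the flavor of non-monotonic reasoning (our simplification, not McCarthy’s formal circumscription, which minimizes the extension of “abnormality” predicates in a logical theory): conclusions drawn by default can be retracted when new facts arrive.

```python
# Toy default reasoning in the spirit of circumscription (illustrative).

def flies(animal, facts):
    """Default rule: birds fly unless known to be abnormal ("ab")."""
    return ("bird", animal) in facts and ("ab", animal) not in facts

facts = {("bird", "tweety")}
print(flies("tweety", facts))      # True: assume normality by default

facts.add(("ab", "tweety"))        # learn that tweety is, say, a penguin
print(flies("tweety", facts))      # False: the conclusion is retracted
```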

In 1982, he seems to have originated the idea of the space fountain, a type of tower extending into space and kept vertical by the outward force of a stream of pellets propelled from Earth along a sort of conveyor belt which returns the pellets to Earth. Payloads would ride the conveyor belt upward.

Other activities

McCarthy often commented on world affairs on the Usenet forums. Some of his ideas can be found in his sustainability Web page,[22] which is “aimed at showing that human material progress is desirable and sustainable”. McCarthy was a serious book reader, an optimist, and a staunch supporter of free speech. Some of his best Usenet interactions are visible in the rec.arts.books archives, and McCarthy actively attended dinners of r.a.b. readers in Palo Alto, called rab-fests, in the SF Bay Area. He went on to defend free-speech criticism involving European ethnic jokes at Stanford.

McCarthy saw the importance of mathematics and mathematics education. His Usenet .sig for years was, “He who refuses to do arithmetic is doomed to talk nonsense”; his license plate cover read, similarly, “Do the arithmetic or be doomed to talk nonsense.”[23][24] He advised 30 PhD graduates.[25]

His 2001 short story “The Robot and the Baby”[26] farcically explored the question of whether robots should have (or simulate having) emotions, and anticipated aspects of Internet culture and social networking that have become increasingly prominent during ensuing decades.[27]

Personal life

McCarthy was married three times. His second wife was Vera Watson, a programmer and mountaineer who died in 1978 attempting to scale Annapurna I Central as part of an all-women expedition. He later married Carolyn Talcott, a computer scientist at Stanford and later SRI International.[28][29]

Chuck Hull

The first 3D printer, which used the stereolithography technique, was created by Charles W. Hull in the mid-1980s.

Chuck Hull is the co-founder, executive vice president and chief technology officer of 3D Systems. He is an inventor of the solid imaging process known as stereolithography, the first commercial rapid prototyping technology, and the STL file format (a minimal sketch of the ASCII STL layout follows the summary below).

Born: 12 May 1939, Clifton, Colorado, United States

Nationality: American

Organization founded: 3D Systems

Awards: IRI Achievement Award

Parents: Esther Hull, Lester Hull

Siblings: Mary Rene Royer
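Since the STL file format is mentioned above, here is a minimal sketch of its ASCII layout (Python, illustrative only; real exporters compute proper unit normals and usually emit the more compact binary variant). An STL solid is simply a list of triangular facets, each with a normal vector and three vertices:

```python
# Minimal writer for the ASCII variant of the STL format (illustrative).

def write_ascii_stl(path, name, facets):
    """facets: list of (normal, (v1, v2, v3)), each a 3-tuple of floats."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, vertices in facets:
            f.write("  facet normal {:e} {:e} {:e}\n".format(*normal))
            f.write("    outer loop\n")
            for v in vertices:
                f.write("      vertex {:e} {:e} {:e}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle in the z=0 plane, normal pointing up.
write_ascii_stl(
    "triangle.stl",
    "demo",
    [((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))],
)
```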

Q&A with Chuck Hull, Co-Founder, 3D Systems

At 74, having invented a new booming manufacturing industry, building up one of the biggest additive manufacturing companies in the world from scratch and earning himself a spot in the 2013 IW Manufacturing Hall of Fame, Charles “Chuck” Hull has more than earned a restful retirement.

But he just can’t seem to give it up.

He works today as the executive vice president and chief technology officer of his company, 3D Systems, still tinkering in the lab and working out new innovations and new applications for his signature 3-D printing technology.

“I’m old enough that I should have retired long ago,” he told me recently. “But it’s such an interesting field that you need to be constantly involved. I want to help make it happen.”

Last month, I was able to drag him from his lab for a few minutes to discuss his work, his vision and the impact it has had on the world.

Q: As the story goes, you invented the first 3-D printing technology — stereolithography — in a backroom lab at UVP back in 1983. And now, after 30 years of work and development, the whole additive manufacturing market has suddenly blown up and people think it’s brand new. What do you think about this boom? Is it overdue?

When we first started 3-D printing all those years ago, I didn’t expect it to become mainstream for a long time. At the time, I said 25 years, but I thought it would take even longer.

That’s the history of all inventions. People don’t invent things of this ilk and then all of a sudden people are beating on your door and everybody does it. It takes a long time to recognize what it is and it takes a long time to perfect the craft.

3-D printing isn’t easy. You see a machine, you think it’s straightforward and easy, but it’s not. It takes a long time to figure out technically. Really, we were perfecting the craft for the first 10 years at 3D Systems, taking it from an idea to substance to something that was good even at the industrial level.

But then these last couple of years have sort of surprised me. I’ve been immersed in the struggle of this for all these years and suddenly to have people like IndustryWeek recognize this as more mainstream or more common is definitely a surprise.

The Origin Story

Q: Could you describe where the idea began? How do you invent 3-D printing — in a form as technical and complicated as stereolithography — out of nothing?

My background had been as a design engineer. In that field, whenever we got into designing new injection-molded plastic parts, it was a very time-consuming and expensive process.

The process then was, you design the part, then do blueprints of the part, discuss it with a toolmaker who would make the mold for the plastic part. Then that mold would go to a molder who would inject that first part. At least six weeks later, maybe eight weeks, you would see your first part.

That took a long time but, worse than that, the part would never be quite right, so you’d have to redesign, make some changes to the tooling and cycle it again.

So it would be months and months just developing a first article that you could test.

That was the way the world was back then and everybody struggled with that.

My goal was to see if I could come up with a way to get that first article quicker so you could do the iterations quickly and then finally tool for production.

So, I basically invented all of the ideas that wouldn’t work and then finally got on to what was ultimately stereolithography. And on March 9, 1983, I made the first part that way.

Q: And from there, I know, you worked out a patent in 1986 and co-founded 3D Systems later that year. But who were your clients back then? Was there any industry that saw the potential that early?

Once we started the company, we kind of put our feelers out to see if there was interest out there. And there was. Actually, there was a huge interest in prototyping, mainly from the automotive segment.

The automotive companies at the time were trying to turn out new cars, high quality cars. And at the time U.S. auto companies were not being very responsive. They couldn’t quickly turn out new designs and the new designs that they did turn out weren’t world-competitive.

So there was a lot of interest in any kind of technology that would help improve that. And that got us our start right away doing products and developing technology for automobiles. Shortly after that lots of other manufacturers jumped into that for the same thing.

Also, in those early years we developed methods to prototype metal parts and to do short run production on metal parts. The method there was to come up with an alternate pattern method for investment casting, which is the traditional lost-wax casting method. That was probably the first major deviation from prototyping plastic parts.

That became very successful. Lots of companies, lots of foundries were having the same problems getting to first article quickly enough.

And so we developed this method and we called it ‘Quick Cast’ to quickly cast a metal part for a large variety of metal alloys. It’s still used today. It’s a major application in aerospace and related industries.

The Future of 3-D Printing

Q: So now that it’s taken off and manufacturers and consumers are starting to realize the full value of 3-D printing, where does it go next? Will it ever really compete with traditional manufacturing?

I’m not a futurist. I don’t have a crystal ball that tells me what things are going to happen, but I know this: when you get enough smart people working on something, it always gets better.

Printing pistons for engines, for example? That may or may not happen. Right now, there are perfectly good ways to make a lot of components without the help of 3-D printing.

With 3-D printing, the real strengths so far are complexity and customization. If you have a manufacturing process where you need a lot of detail or a lot of differentiation between parts, that’s where 3-D printing can play.

That’s why medical applications are a natural fit for 3-D printing: all bodies are different. When you try to manufacture something for teeth, for example, they all have to be different for each patient. The same goes for knees and joints.

So if you’re looking at the future, you’re going to see more in that area—manufacturing with complex shapes, complex patterns, even in high volume production.

That said, the speed and the cost effectiveness of 3-D printing are constantly moving. Over time, you compete better and better with traditional manufacturing.

Q: Looking back from this 30-year milestone, what would you say is your biggest accomplishment—besides the technology itself?

Back in the 80s and 90s, there was this whole attitude that manufacturing was all going to be done offshore. This was the attitude not only in the U.S., but all over the world. Everything was moving to low-labor-cost countries.

I have never thought this was a good thing. My view is that manufacturing should be a core capability for a country, particularly in the U.S.

That attitude is coming back today, in our country and in European countries: there needs to be a core competency in manufacturing. And today that has come to mean a higher technology capability.

Helping that come about, not just with 3-D printing, but a lot of digital manufacturing, and being part of that movement makes me feel pretty good.
