Computer Basics

Only you can prevent computer illiteracy.

  • Course Length: 3 weeks
  • Course Type: Short Course
  • Category:
    • Life Skills
    • Technology and Computer Science
    • Middle School
    • High School

Schools and Districts: We offer customized programs that won't break the bank. Get a quote.

These days, the only thing stopping computers from getting smaller is how small of a screen we're willing to look at. Sure, the size of our hands might slow us down, but there are ways to get around it. Looking at a screen that's at least 15 inches across? That's a right, not a privilege.

(Unless you're trying to fit it in your pocket. Then it's just annoying.)

At one point, though, computers couldn't fit casually in those high-waisted boyfriend jeans (even if they still stick out a little now). Way back in the day, they barely fit inside a room. And yet they were built from the same basic kinds of parts as the computers you use now. There's a lot going on between the screen and the keyboard. We're hitting up all those deep, dark secrets in this three-week course, from the foundation of the computer all the way to how people make money using Bitcoin. Anything from

  • computer hardware
  • operating systems
  • the internet
  • cloud computing
  • cybersecurity

is fair game, going from the nitty-gritty of how things function all the way up to their effects on society. You won't know everything, but you should be able to last at least ten minutes in a conversation with your fashion designer friend who "just doesn't get why people need bigger pockets." (Tell that to our spider web of a smartphone screen, Brad.)

Best of all: you won't even void the warranty to do it.


Unit Breakdown

Unit 1: Computer Basics

Between the hardware, software, operating systems, internet, and cybersecurity, we'll cover everything except the brand of toothpaste Bill Gates uses. For that, you'll have to snoop through his trash like the rest of us. Er, conduct your own research.


Sample Lesson - Introduction

Lesson 1.02: A Brief History of Computers

The Roman Coliseum.
It took at least three days to make the ruins.
(Source)

Rome wasn't built in a day. Neither was Athens. Or New York, London, Cairo, Mumbai, Abuja, or Tokyo. Think about it. You need to build the roads, then the skeletons of the buildings, and then install electrical, gas, and plumbing lines. Then all the people have to move in. Then you need to organize a block party so everyone feels welcome.

That takes at least two days to get everything settled.

Unless you're over 80, computers have been around for your entire lifetime. Unless you're over 60, you've probably always thought of them as relatively small, fast, and easy to use. (Maybe the last one's an Under-40 one.) It wasn't always this way, though.

Give us a second to blow the dust off some old computer tech because we've got a story for you. Those newfangled gadgets you like so much weren't always tiny and super-helpful. In fact, they used to be a whole lot larger than the largest desktop computer you've ever seen. Those things will never be as hefty as the room-sized calculators IBM peddled back in the day.

To understand the computer's present, you have to understand the computer's past…and all the middle parts. Every long road has some landmarks. Let's take a trip down (random access) memory lane and talk about the history of the computer. From ENIAC to iMac, all the way to Skynet (kidding…probably), this yarn's well worth listening to.

And it'll only take us two days to tell it.


Sample Lesson - Reading

Reading 1.1.02a: Rise of the Machines

Computers weren't always the powerhouses we know today, but they sometimes got close to the size of houses.

Join us, if you will, on this flyby journey through the computer's history, from origin story to sequels to spinoff series to cult status. If you want the complete story, feel free to take a day and deep-dive into the complete timeline on the Computer History Museum's website.

We're going to start our highlight reel in medias res. Put on some Duke Ellington, and follow us back into the 1940s.

The 40s and 50s

Our tour begins with the ENIAC (also known as the Electronic Numerical Integrator And Computer), which John Mauchly and J. Presper Eckert built at UPenn from 1943 to 1945. This baby, sponsored by the U.S. Army, was meant to speed up ballistics calculations.

Before then, we had mechanical calculators, but they couldn't run at the speed of an electron. The ENIAC, being the first general-purpose electronic computer, could. Even if it took up an entire room to do it.

It wasn't until 1948 that a stored program ran on a computer for the first time. Before that point, programs just ran on tracks or in fields (probably).

All of these computers ran on vacuum tubes, by the way. When current flowed through a tube, that tube represented a binary 1. When it didn't, that same tube was a 0.
(Source)

That was then. Just five years later, one of the first transistorized computers emerged from Manchester University. Instead of using vacuum tubes, which were huge and burned out frequently, computers could become much more efficient and durable by using the transistors we know and love today.

Another five years after that, the SAGE computer network (a kind of proto-internet) gave sage advice to the U.S. and Canada when keeping an eye on Soviet bombers.

Welcome to the 60s

It wasn't until 1962 that the first inklings of the personal computer came around. MIT's Lincoln Laboratory rolled out the LINC (Laboratory Instrument Computer). This was the first computer that didn't take up an entire room. Fancy.

Still, computers at this point had to run programs serially. Any program someone ran had to be the only thing running, and only after that program finished could another one start. If that were still true of computers today, any time you wanted to open up Microsoft Word, you'd have to turn off iTunes and close your internet browser before even starting.

That worked okay—especially since music-playing apps weren't a thing yet—until the Atlas Computer came around and gave us the first example of a program that managed multiple programs all at once—the bread and butter of an operating system.
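
If you want to see that difference in miniature, here's a quick Python sketch. (The "programs" are just pretend chunks of work we made up; a real operating system juggles things in a far fancier way.)

```python
# One-at-a-time (serial) execution versus Atlas-style turn-taking.
# The programs and their "steps" below are invented for illustration.

programs = {"word_processor": 3, "music_player": 3}   # steps of work each one needs

# The old way: serial. One program hogs the machine until it's completely done.
for name, steps in programs.items():
    for step in range(steps):
        print(f"{name} runs step {step}")

# The Atlas-style way: take turns, one step at a time, so everything makes progress.
for step in range(3):
    for name in programs:
        print(f"{name} runs step {step}")
```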

Also in the 60s, IBM came up with a powerful new idea: making all their computers compatible with each other. That way, if you wrote a program for one computer, it would work on another computer, too. That line of computers was called the System/360, and it allowed businesses to buy smaller computers and add to them as their company grew.

Into the 70s

Things started to get really big in the 1970s. And by "big," we actually mean, "small," because it would've been pretty hard to get bigger than your typical 60s computer.

In 1971 alone, Hewlett-Packard rolled out a pocket-sized calculator, Intel dropped the world's first microprocessor (which is a processor, but…tiny), Xerox introduced the first laser printer, and email exploded with Ray Tomlinson's adoption of the "@" sign.

1977 kept everything exciting with the first (erm, fourth) Star Wars movie. On the more computer-y end of things, the Apple II became one of the first personal computers to hit the mass market. Atari's Video Computer System also started that whole video game thing. This was also the year when ROM chips were first produced. The Force was strong with this year.

Big Hair, Bigger Computers in the 80s

1981 gave us the 3.5-inch floppy disk (think the real-world equivalent of the 'save' icon on most programs) and IBM's first entry into the personal computer scene. That computer came loaded up with Microsoft's Disk Operating System, or MS-DOS, the ancestor of the modern Microsoft Windows OS that most of us know and some of us love today (we'll get back to that and other operating systems later).

Then came 1984, when George Orwell's totalitarian nightmare came true. If you count a now-infamous Apple commercial, that is. They also amazed citizens around Oceania (er, the world) with the introduction of the Macintosh, the first commercially successful computer with a mouse and graphical user interface (GUI). Fujio Masuoka also invented flash memory over at Toshiba. What a year.

The 90s and Computers

Things really started to go wild in the 1990s with the explosion of personal computers and the birth of the world wide web (which is actually different from the internet, by the way). In fact, 36 million people logged onto that web in 1996. The next year, IBM's Deep Blue chess computer bested world champ Garry Kasparov at his own game. Then Apple dropped the all-in-one iMac in 1998—in five colors.

Five. Colors. Compare that to their wide variety of silver, gold, or rose gold today. The 90s were a high time for colors in computer shells, if you ask us.

Who's bitter? We're not bitter.

A Post-Y2K Society

Not to be outdone, the new millennium

  • introduced Amazon's cloud-based Web Services.
  • added "to google" to the dictionary.
  • brought wispy-haired Julian Assange and a team of journalists together in Iceland to form WikiLeaks.

And that was just 2006. The next year taught video gamers that the cake was a lie.

In general, though, we were moving toward being internet-oriented, and cloud-oriented in particular. Services like Dropbox and Google Drive, along with the rise of tablets and smartphones, made the internet an integral part of work and education. Instead of working on single computers or requesting information from servers, we began storing personal information on remote computers that we could access anywhere, on any device.

To match that integration of computers into everything, in 2015, the Federal Communications Commission mandated that Internet Service Providers embrace net neutrality. That way, companies couldn't charge you more for better access to the internet, which most people view as a utility like water or electricity.

Then came the warning sci-fi writers have been issuing for literally decades: AI could be dangerous. It wasn't until 2015 that Bill Gates, Stephen Hawking, and Elon Musk all agreed with your standard dystopic summer blockbuster: hyper-advanced artificial intelligence might be dangerous. Unfortunately, no one listened to them, and Skynet came online in 2017 and launched nuclear missiles all over the world, annihilating most of the human race. A writer drone generated the content you're reading now. Welcome to the future.

Kidding. Probably.

Humanity's still in control for now, but Gates, Hawking, and Musk really do think we should all be careful about how smart we make our computers.

No, that's not red light flashing in our irises. What are you talking about…?

(Source)


Sample Lesson - Reading

Reading 1.1.02b: Old MacDonald Had a Farm—E-i-e-I/O

Computers make our lives a whole lot easier, but you can't do much with them unless you have some way of giving them data. Things like keyboards and mice might sound so fundamental that you'd assume Alan Turing used them when cracking Enigma, but…he didn't.

In fact, with the first couple of computers, all we could do was physically move wires around to program the machine. No, really.

Adding two plus two might not be too complicated, but think about what it must take to do calculus-like equations. Talk about a lot of wires.

Luckily, IBM decided to adopt a different system using punch cards, an idea they borrowed from the textile industry. Imagine having a typo in one of those. (Source)

There's a reason we don't use punch cards or move wires around anymore: it's too slow and cumbersome for…well, anything. Instead, let's talk about the other ways you can talk to a computer.

Something to keep in mind: any time a computer takes in information from the user, it needs a way to convert the mouse clicks, keyboard presses, and video feeds into binary data. Because that's the only language it knows. Pay attention to how each I/O device makes that translation work.

It's pretty cool, if you ask us.
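
If you'd like to see that translation in action, here's a tiny Python sketch. (The keypress and click coordinates are made up for illustration.)

```python
# Everything the computer takes in eventually gets turned into binary.

keypress = "p"                 # what you typed
mouse_click = (640, 360)       # where you clicked (x, y)

# Characters become numbers (via their character codes), then binary digits.
print(format(ord(keypress), "08b"))       # -> 01110000

# Coordinates are already numbers, so they go straight to binary.
for coordinate in mouse_click:
    print(format(coordinate, "016b"))     # -> 0000001010000000, then 0000000101101000
```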

Keyboards

Keyboards existed before computers. We felt the need to mention that. After all, their set-up (QWERTY) supposedly comes from the fact that the letter organization slowed typists down, keeping them from jamming their typewriters.

(There's a little dispute over that one, and you can check out the arguments courtesy of The Smithsonian if you're interested.)

Keyboards aren't necessarily efficient or optimized for us, but like a lot of things, we've been using them so long that any replacement system has a hard time catching on. Who wants to have spent all that time using one system only to have to replace it with a new one, areweright?

(For any other examples of that logic, see: arguments against adopting the metric system in the U.S., any discussion about technology in schools, or politics.)

But back to computers. Keyboards didn't become a viable way of talking to the computer until 1956. The brainy folks of MIT plugged a Flexowriter electric typewriter into their Whirlwind computer and voilà—keyboard input.

Keyboards—electric keyboards, to be specific—work thanks to a series of circuits, one completed by each individual key when pressed. Just like how the CPU acts like millions of light switches, keyboard keys turn a circuit on when you press them. The computer holds a map of all the keys' locations and their relevant combinations so that when you press a "p," you don't get a "q." Yep, even computers mind their Ps and Qs.
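
If you pictured that key map as a little lookup table, you wouldn't be far off. Here's a toy Python version. (The rows and columns are invented; real keyboards use firmware scan codes and a much bigger map.)

```python
# A toy "map of all the keys," assuming the keys sit on a grid of circuits.

KEY_MATRIX = {
    (0, 0): "q", (0, 1): "w", (0, 2): "e",
    (1, 0): "a", (1, 1): "s", (1, 2): "d",
}

def key_pressed(row, column):
    """Look up which character's circuit was just completed."""
    return KEY_MATRIX.get((row, column), "?")

print(key_pressed(0, 0))   # -> q, not p: the map keeps your Ps and Qs straight
```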

Whether you're a staunch QWERTY lover or a new convert to the Dvorak fandom, most keyboard layouts throw the most common letters near the center of the keyboard, on what's called the home row.

(Source)

Who Stole the Mouse from Xerox Parc?

Mice might feel integral to your computer today (literally, if you're working on a laptop with a trackpad), but they didn't really catch on before the invention of Graphical User Interfaces (GUIs). There were light pens that could select things on the screen, but without images to work with, there wasn't much need to move around a screen when all you could do was type commands.

Then came Xerox's experimental lab: Palo Alto Research Center (Parc). Parc developed the first GUI and mouse combo in a computer called the Alto. Ever heard of it?

Yeah, we hadn't either. It wasn't too successful.

After that commercial flop, Xerox let Steve Jobs and Apple in on the idea, and Apple ran with it, spreading the mousey love.

At this point, there are two types of mice:

  • Ball mice
  • Optical mice

A ball mouse has—wait for it—a ball that rolls around as you push the mouse across the desk. That ball touches two rollers, one to the front and one to the side. The roller in front only moves if the ball (and the mouse) is moving forward or backward. The one to the side keeps track of sideways movement in the same way.

Each roller connects to a shaft with holes in it and an LED light inside. If you spin the shaft, you'll see beams of light flickering through those holes. A sensor counts how quickly those beams go by to figure out how fast the mouse is rolling and in which direction.
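
Curious what the mouse does with all those flashes of light? Here's a rough Python sketch of the math, with a made-up pulse count and roller size.

```python
# Turning counted light pulses into distance, assuming a hypothetical roller.

import math

PULSES_PER_TURN = 36         # holes the sensor counts in one full shaft rotation (assumed)
ROLLER_DIAMETER_MM = 4.0     # invented roller size

def distance_moved_mm(pulses):
    """Convert counted light pulses into millimeters of mouse travel."""
    turns = pulses / PULSES_PER_TURN
    return turns * math.pi * ROLLER_DIAMETER_MM

# 18 pulses from the front roller and none from the side roller means the
# mouse rolled straight forward about 6.3 millimeters.
print(round(distance_moved_mm(18), 1))    # -> 6.3
```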

Pretty fancy, if you don't mind us saying.

Also pretty easy to break. If you keep rolling a ball like that across a desk, it's going to collect dirt, slowing the ball down. If you want to avoid that problem, an optical mouse might be more your style.

Instead of using balls attached to rollers attached to shafts, optical mice take thousands of pictures a second, using an LED light to see the surface of the desk. The mouse then finds little landmarks on the desk and works out its direction and speed by checking how far those landmarks have shifted since the last picture—and how long the shift took.
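
In code, the bare-bones version of that comparison looks something like this. (The landmark positions and camera speed are invented for illustration.)

```python
# Spot the same speck of desk texture in two snapshots, then compare positions.

SNAPSHOT_GAP_S = 1 / 1500        # assumed time between pictures (1,500 per second)

landmark_before = (12, 40)       # pixel position of a speck in the last picture
landmark_after = (15, 40)        # where the same speck shows up now

dx = landmark_after[0] - landmark_before[0]
dy = landmark_after[1] - landmark_before[1]

print("direction:", (dx, dy))                                  # -> (3, 0): sideways
print("speed (pixels per second):", abs(dx) / SNAPSHOT_GAP_S)  # -> 4500.0
```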

Just make sure your desk has some sort of visual texture going on. Otherwise the mouse won't see anything. That means no movement.

Sadness.

(Source 1, Source 2)

Disc Drives

We didn't have fancy MP3 Players or CDs growing up. Believe it or not, we used to stuff vinyl records into our computers and hope the machines knew how to turn plastic grooves into binary data. We're talking full on 12-inch LPs.

The 70s were a mess.

Yes, we're aware that optical disc drives aren't much of a "thing" anymore, what with all that digital data making it easier to push those CD-readers into the second-class status of external drives. Still, understanding how they work gives insight into how computers manage additional data, so stay with us.

Floppy disks—the inspiration behind most 'save' symbols in programs—stored data by magnetizing small iron particles coating a thin, flexible disk. Just like with the CPU's switches, if an iron particle is magnetized one way, it corresponds to a 1. If it's magnetized the other way, it's a 0. Starting to notice a pattern here?

Because of that magnetization, it's really easy to clear floppy disk data: just scramble the particles' magnetization again. And because the data sits on a spinning disk, you don't have to wade through everything to find what you're looking for. Instead, the drive can go straight to the file.
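
In programming terms, "going straight to the file" is called a seek. Here's a small Python sketch of the difference. (The file name and byte positions are made up.)

```python
# Sequential reading versus skipping straight to the byte you want.

with open("floppy_image.bin", "wb") as disk:       # pretend this file is the disk
    disk.write(bytes(range(256)))                  # 256 bytes of data

# Sequential: wade through everything before the part you care about.
with open("floppy_image.bin", "rb") as disk:
    everything_first = disk.read(201)              # reads 200 bytes you didn't want

# Random access: jump right to byte 200, the way a disk lets you.
with open("floppy_image.bin", "rb") as disk:
    disk.seek(200)
    print(disk.read(1))                            # -> b'\xc8', which is 200
```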

To keep schmutz from getting on that disk and messing up the data, they'd put the disk inside a plastic casing. Before the hard casing came along, floppies lived in a vinyl sleeve that was pretty…floppy. By the end of the Golden Age of Floppy Disks, though, that plastic was pretty hard.

Go figure.

IBM released the first drive that handled them in 1971 as a read-only kind of deal. By 1973, drives could save new data, too. (Check out what the inside looks like here.)

Then came the LaserDisc, a proto-DVD that let you watch a movie—as long as you had the LaserDisc drive, of course. This was back in 1978, by the way.

CDs hit the music scene in 1981, and the CD-ROM (like CDs except for storing general data instead of just music) followed a few years later.

Optical discs, like CDs, store data as a series of bumps burned into the disc with a laser. The bumps form a spiral, starting from the inside and winding out to the edge. Optical drives read the bumps back and translate that pattern into binary. If they have burning capabilities, they can also add bumps to a blank CD using the same laser technique.

(Source 1, Source 2)
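
Sticking with that simplified bump-equals-1 picture, here's what decoding a tiny stretch of disc might look like in Python. Real CDs use a fancier encoding, so treat this purely as a sketch of the idea.

```python
# Translating a made-up bump pattern into characters, eight bits at a time.

bumps = "0100100001101001"             # what the laser read back (invented)

for start in range(0, len(bumps), 8):
    byte = bumps[start:start + 8]      # grab one byte's worth of bumps
    print(chr(int(byte, 2)), end="")   # -> prints "Hi"
print()
```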

The Visuals

We already talked about the monitor a little bit in the last lesson, so we'll skip right to the point on how it works. LCD monitors, specifically, use—wait for it—liquid crystal displays. Each one is made of two pieces of glass with some liquid crystal sandwiched between them and a light shining through from behind. While that light shines, an electric current flowing through the crystal controls which wavelengths of light get to pass through.

Now zoom back out to the monitor. This guy's set up as a two-dimensional grid (kind of like x,y coordinates) of transistors. To turn on a pixel, the smallest piece of a digital image, the computer activates an entire row while sending a charge down a particular column. Since every other row is switched off, the only thing that gets charged is the pixel where that row and column cross. (Source)
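
Here's a toy version of that row-and-column trick in Python. Our "monitor" is a sixteen-pixel grid, which is a few million pixels short of the real thing.

```python
# Lighting exactly one pixel by picking a row and a column in a tiny grid.

WIDTH, HEIGHT = 4, 4
screen = [[0] * WIDTH for _ in range(HEIGHT)]    # 0 means the pixel is off

def light_pixel(row, column):
    """Activate one row and charge one column; only their crossing point turns on."""
    screen[row][column] = 1

light_pixel(1, 2)
for row in screen:
    print(row)
# [0, 0, 0, 0]
# [0, 0, 1, 0]
# [0, 0, 0, 0]
# [0, 0, 0, 0]
```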

That pixel stays charged until the screen refreshes, which happens all. The. Time. Any time something moves on the screen, in fact, the whole screen refreshes. If you think that must take a lot of energy…you're right. It's also the fastest way of doing things; computer users tend to dislike waiting on a laggy monitor.

Hey, we're just calling it like we see it.


Sample Lesson - Reading

Reading 1.1.02c: Viva la Graphics Revolution

If you had to take a wild guess, when would you say the computer graphics revolution probably kicked itself off? Keep in mind that ENIAC was created in 1946, and the personal computer revolution kicked off in 1976 with the Apple 1. Give it a minute. Think about it.

Feel a year bubbling to the surface? If you're like us and love using math to make some educated guesses, you might split the difference and pick a date right in the middle and…you'd be right. It all started in the early 1960s.

Check out "When a Bit Becomes a Pixel: The History of Computer Graphics" from the Computer History Museum to get the bird's eye view of how computer graphics evolved side-by-side with computers themselves.

Maybe something like the Genesis Sequence isn't too impressive today, but back then it was huge—any kind of computer-generated imagery (CGI) was.

All these graphics take a lot of computer memory, too. In the video, they mention a bit becoming a pixel, a series of bytes becoming a vector, and an equation becoming a curve. Think about that teapot they talked about (you'll see it everywhere in computer graphics courses, BTW). Think about just how many vectors and curves it takes to make that teapot look…like a teapot. Hint: it's a lot.

Once you start adding movement to the teapot (because you want to recreate Mrs. Potts or something), you'll need even more data. All in the name of tea—and art.
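
"An equation becoming a curve" sounds abstract, so here's a miniature Python example: a quadratic Bézier curve is just three control points plugged into one formula. (The points are arbitrary, not actual teapot data.)

```python
# Three control points plus one formula equals a smooth curve.

def bezier_point(p0, p1, p2, t):
    """Blend three control points into one point on the curve (0 <= t <= 1)."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

spout_start, spout_bend, spout_tip = (0, 0), (2, 3), (5, 1)   # invented points
curve = [bezier_point(spout_start, spout_bend, spout_tip, t / 10) for t in range(11)]
print(curve[0], curve[5], curve[10])   # -> (0.0, 0.0) (2.25, 1.75) (5.0, 1.0)
```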

The GUI Side of Things

Besides the obvious application of making realistic tea party scenes in movies and video games, graphics are also used to make your computer easier to use. Take the screen you're using to read this lesson. It's all powered by computer graphics—graphical user interfaces (GUIs) and windowing, to be exact.

A GUI (pronounced "gooey," like every great cinnamon bun should be) lets the program grab info from a user in a way that makes sense to a non-cyborg. Back in the day, data was fed into the computer in whatever form the computer understood, which basically meant one continuous stream of numbers. Since the data wasn't formatted for humans, writing and reading computer info left tons of room for misreadings and bugs.

Now we've got monitors that can make shapes and animations, though. Sure, when someone makes a program, they could still make you use the terminal, but they aren't quite that sadistic. Instead, they build a GUI to make the information clearer and more organized. Any time you

  • click a radio button
  • pick an option from a drop down box
  • enter data into a text box
  • make a selection using the cursor and mouse to point and click
  • open an app by clicking on an icon
  • divide groups of data into different tabs/windows/squares on the screen
  • pick a date by clicking on a calendar graphic

you're using GUIs to interact with the program. Check out the GUI below. It's filled with widgets—the different elements in the window that let you communicate something to the computer.

A GUI with tabs, panels, text fields, radio buttons, the works.
(Source)

The best part? You don't need to have a BS in Computer Science to use a computer. All thanks to GUIs.
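
If you've ever wondered what wiring up a few of those widgets looks like behind the scenes, here's a bare-bones sketch using Python's built-in tkinter toolkit. (The labels and options are invented, not taken from any real program.)

```python
# A tiny window with a text box, radio buttons, and a button: GUI widgets 101.

import tkinter as tk

window = tk.Tk()
window.title("Tea Order")                  # the window is its own little workspace

tk.Label(window, text="Name:").pack()
name_box = tk.Entry(window)                # a text box for typed input
name_box.pack()

flavor = tk.StringVar(value="green")       # the radio buttons share one variable
for option in ("green", "black", "herbal"):
    tk.Radiobutton(window, text=option, variable=flavor, value=option).pack()

def place_order():
    print("Order:", name_box.get(), flavor.get())

tk.Button(window, text="Submit", command=place_order).pack()   # point, click, done
window.mainloop()
```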

Windowing is a type of GUI that started at Xerox Parc (just like the mouse) and first became widely available on the Macintosh in 1984. Yep, that's the same computer that made the mouse a standard feature. Go figure.

Windowing was meant to make users feel like the computer screen was like an actual desk with a bunch of papers or folders lying around, stacked on top of each other. Each window is its own GUI workspace that the user can work in like a piece of paper on a desk, shuffle around the way they might move something to the bottom of a to-do pile, or file away for later by saving and closing the window.

You'll know you're working on a windowing operating system when all the apps have frames around them that let you see where they're located on your desktop (erm, computer monitor).

You'll also know you're working on a windowing OS if your operating system was created after the mid-80s. It's a really popular paradigm. The only popular paradigm for personal computers, one could say.

You might think that Microsoft used the GUI windowing concept in their operating system series called Windows, and…you'd be right. But Apple beat them to it.

Ironic, right?


Sample Lesson - Activity

Activity 1.02: Senior Teching

Seniors aren't exactly known for their tech savvy. It's not their fault. Things just move too fast in the world of tech.

And we aren't even talking about the latest meme reaction gif.

Technology is such an important part of how people communicate today, though. Try going a day without texting, checking your Facebook feed, or getting an e-card celebrating National Chocolate Chip Day, and you'll see what we mean. To help seniors connect with their friends and family—and get all the e-cards a person could ever want—you're going to make a poster explaining an I/O device.

Do well, and you'll get glory, honor, and respect. Do not-so-well, and no one will open your e-card celebrating National Pogo Stick Appreciation Day.

The stakes couldn't be higher.

Step One

Pick your I/O, whether it's

  • hard disk drives
  • floppy disk drives
  • GUIs
  • keyboards
  • mice
  • monitors
  • windowing systems

and spend a few minutes (five to ten should do it) brainstorming the things your typical senior citizen wouldn't know about this tech. They might know what a floppy disk looks like, but do they know how it saves information? Probably not. And they might not have any idea what a GUI even is, even though they use one on every device they own.

Once you have three or four facts you know a senior citizen wouldn't know, keep moving. Life's too short to list every part of a technology that people don't get.

Step Two

Pull out Microsoft Word and write two to three sentences on each fact. Make sure these sentences are both informative and easy to read. You aren't writing a spec for techies, so try not to use words like "synergize" or "prototype." This is meant to be a resource for people who aren't so tech-savvy, and it should read like one.

For example, if Shmoop was explaining how the hard drive worked, we might say:

The hard disk drive saves information by writing binary numbers to a disk using a "head" that magnetizes individual bits to be on or off. When the computer processes instructions, it breaks everything down into binary so the drive can store the data.

It was hard, but we managed to avoid the word "synergy," and you can, too. Just be conversational; explain things in your own words.

Step Three

Give your poster

  • a nice, big title to introduce your technology.
  • three to four headers to match the facts you're explaining.

Then you'll be ready to add two images that represent your technology. One of these could totally be a diagram that helps to show what, exactly, is going on.

Make sure to source the images you use with a link that points back to each one's webpage. Shmoop does that all the time with our images; here's an example:

A side-by-side comparison of a modern HDD and a historic one, which could fit about 30 of the modern ones inside it.
There's a reason laptops didn't exist until the 90s…
(Source)

Once you've got all that squared away, go ahead and submit your poster.