Crash Course Computer Science Host
Do you want to host Crash Course? Do you love Computer Science? Are you gifted with the rare skill of reading from a teleprompter? Audition to be our host!
Fri Jul 8 18:01:18 2016
Crash Course Computer Science will be a 40-episode series. We're looking for someone who's enthusiastic about computers and can talk about a complicated subject with a sense of humor.
This is a paid position and will require travel (unless you happen to live in Indianapolis, IN).
We will require you to audition by hosting an abbreviated episode of Crash Course. Below are a few excerpts from scripts we will be filming - please record yourself reading them in your most enthusiastic crash-course-y way and submit a link to an unlisted YouTube video. Please try not to memorize anything, but instead read from a teleprompter, your phone, a laptop, paper off screen, etc., as this will give you the closest experience to what it'll be like on set. If you need to restart a line or paragraph, feel free to do so.
CC COMP SCIENCE
Ep1. EARLY COMPUTING
Hello world, I’m _____ and welcome to CrashCourse Computer Science!
Over the course of this series, we’re going to go from bits, bytes, transistors and logic gates, all the way to Operating Systems, E-Commerce, and Robots. No, we’re not going to teach you how to program. Instead, we’re going to be exploring a broad swath of computing topics, both as a discipline and as a technology.
Computers have become the lifeblood of our world. If they were to suddenly turn off, all at once, the power grid would shut down, cars would crash, planes would fall, water treatment plants would stop, stock markets would freeze, trucks with food wouldn’t know where to deliver, employees wouldn’t get paid.
Even most non-computer objects - like DFTBA shirts and the chair I’m sitting on – are made in factories powered by computers. Computing touches nearly every aspect of our daily lives.
And this isn’t the first time we’ve seen this sort of transformation through technology. Advances in manufacturing during the Industrial Revolution brought a new scale to human systems - in agriculture, industry, and domestic life. Mechanization meant superior harvests and readily available food, mass-produced goods, cheaper and faster travel and communication, and in general, a better quality of life.
Computing technology is doing the same right now – from precision agriculture (or automated farming) and medical equipment, to global telecommunications and educational opportunities, and very soon, new frontiers like Virtual Reality and Self-Driving Cars. We are living in a time likely to be remembered as the Electronic Age.
Ep4. REPRESENTING NUMBERS AND LETTERS
Because ASCII was such an early standard, it became widely used, and critically, allowed different computers built by different companies to exchange data. This ability to universally exchange information is called “interoperability”. However, it did have a major limitation: it was really only designed for English.
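For the curious, here's a quick Python sketch of ASCII's English-only limit (illustrative only, not part of the scripts):

    # ASCII assigns each English character a number from 0 to 127.
    for ch in "Hi!":
        print(ch, "->", ord(ch))  # H -> 72, i -> 105, ! -> 33

    # But anything outside English fails: ASCII simply has no code for it.
    try:
        "é".encode("ascii")
    except UnicodeEncodeError as err:
        print(err)  # 'ascii' codec can't encode character '\xe9' ...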
Fortunately, there are 8 bits in a byte, not 7, and it soon became popular to use codes 128 through 255, previously unused, for "national" characters. In the US, those extra numbers were largely used to encode additional symbols, like mathematical notation, graphical elements, and common accented characters.
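As a concrete illustration, here's a small Python sketch, assuming Latin-1 (one popular 8-bit extension) as the example code page:

    # In Latin-1, byte values 128-255 carry accented letters and symbols.
    print(bytes([233]).decode("latin-1"))  # byte 233 (0xE9) is 'é'
    print("±".encode("latin-1")[0])        # and '±' lives at byte 177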
The lower 128 Latin characters were shared universally, but elsewhere the extra codes meant different things: Russian computers used them to encode Cyrillic characters, Greek computers, Greek letters, and so on.
Nevertheless, these national character codes worked pretty well for most countries. The problem was, if you opened an email written in Latvian on a Turkish computer, the result was completely incomprehensible.
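You can recreate that exact mix-up yourself; this sketch assumes the Latvian machine used ISO-8859-4 and the Turkish one ISO-8859-9, one plausible pairing among several:

    # The Latvian letter 'ī' is byte 0xEF in ISO-8859-4...
    data = "Rīga".encode("iso8859-4")
    # ...but ISO-8859-9 reads that same byte as 'ï'.
    print(data.decode("iso8859-9"))  # prints 'Rïga': same bytes, wrong letters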
Things totally broke with the rise of computing in Asia, as languages like Chinese and Japanese have thousands of characters. There was no way to encode all those characters in 8 bits! In response, each country invented multi-byte encoding schemes, all of which were mutually incompatible. The Japanese were so familiar with this encoding problem that they had a special name for it: "mojibake" (mo-gee-bah-kay), which means "scrambled text".
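Here's what mojibake looks like in practice, assuming one historically common mismatch, Shift JIS text decoded as EUC-JP:

    # "konnichiwa" encoded one way, decoded another:
    data = "こんにちは".encode("shift_jis")
    print(data.decode("euc_jp", errors="replace"))  # out comes scrambled text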
And so it was born – Unicode – one format to rule them all. Devised in 1992 to finally do away with all of the different international schemes, and replace them with one universal encoding scheme. Unicode has space for over a million codes - enough for every single character from every language ever used – more than 120,000 of them in over 100 types of script – plus space for mathematical symbols and even graphical characters like Emoji :)
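In code, that plays out like this: every character gets one number (its "code point"), and UTF-8 (not mentioned in the script, but the dominant way Unicode is stored) uses a variable number of bytes per character:

    # One code point per character, from any script:
    for ch in "Aж中😀":
        print(ch, hex(ord(ch)), len(ch.encode("utf-8")), "byte(s) in UTF-8")
    # A  0x41     1 byte(s) in UTF-8
    # ж  0x436    2 byte(s) in UTF-8
    # 中 0x4e2d   3 byte(s) in UTF-8
    # 😀 0x1f600  4 byte(s) in UTF-8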
You might be wondering about types of media beyond numbers and letters - like photos, movies and music. In the same way that ASCII defines a scheme for encoding letters as binary numbers, other file formats – such as MP3s or GIFs – use binary numbers to encode sounds or the colors of pixels.
The most important thing to recognize is that, under the hood, it all comes down to long sequences of bits. Text messages, this YouTube video, every webpage on the internet, and even your computer’s operating system are nothing but long sequences of 1s and 0s.
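To drive that home, here's a tiny sketch that prints the actual 1s and 0s behind a short text message:

    # Every byte of the message, as a pattern of 8 bits:
    for byte in "Hi!".encode("utf-8"):
        print(format(byte, "08b"), end=" ")  # 01001000 01101001 00100001
    print()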
Next week, we’ll start talking about how your computer starts manipulating those binary sequences, for our first true taste of computation. Thanks for watching. See you next week.