Wednesday, May 20, 2015

Fetch decode execute cycle

The fetch-decode-execute cycle is the process by which a computer fetches a program instruction from its memory, determines what the instruction is asking it to do, and carries out those actions. The cycle is repeated continuously by the central processing unit (CPU) from boot-up until the computer is shut down. In modern computers this means completing the cycle billions of times a second! Without it, nothing could be calculated.

Registers/circuits involved

The circuits used in the CPU during the cycle are:

Program Counter (PC) - an incrementing counter that keeps track of the memory address of the next instruction to be executed

Memory Address Register (MAR) - the address in main memory that is currently being read or written

Memory Buffer Register (MBR) - a two-way register that holds data fetched from memory (and ready for the CPU to process) or data waiting to be stored in memory

Current Instruction register (CIR) - a temporary holding ground for the instruction that has just been fetched from memory

Control Unit (CU) - decodes the program instruction in the CIR, selecting machine resources such as a data source register and a particular arithmetic operation, and coordinates activation of those resources

Arithmetic logic unit (ALU) - performs mathematical and logical operations
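To make the cycle concrete, here is a minimal sketch in Python of one possible fetch-decode-execute loop. The three-instruction machine (LOAD, ADD, STORE, HALT) and the memory layout are invented purely for illustration; they are not a real instruction set.

# A toy fetch-decode-execute loop. The instruction set and memory map
# below are invented purely for illustration.
memory = [
    ("LOAD", 6),    # address 0: load memory[6] into the accumulator
    ("ADD", 7),     # address 1: add memory[7] to the accumulator
    ("STORE", 8),   # address 2: store the accumulator in memory[8]
    ("HALT", None), # address 3: stop
    None, None,
    5, 7, 0,        # addresses 6, 7, 8: data
]

pc = 0              # Program Counter
accumulator = 0

while True:
    # Fetch: copy PC into MAR, read memory into MBR, then into CIR
    mar = pc
    mbr = memory[mar]
    cir = mbr
    pc += 1                          # PC now points at the next instruction

    # Decode: split the instruction into opcode and operand
    opcode, operand = cir

    # Execute: the control unit activates the ALU / memory as required
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]   # the ALU performs the addition
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[8])    # prints 12 (5 + 7)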

Wednesday, April 29, 2015

Tackling the CIE Exam Papers

The following material is original to the authors and the University of Cambridge Local
Examinations Syndicate are not responsible for the advice given. Please note that the information
that follows is not necessarily true for all questions. There will always be exceptions to general
rules and so these are intended only as a loose guide.
 
General tips
In many cases, you can find clues about the answer within the question itself. The main ‘clues’ are:
• the number of marks given for the question
• the key instruction words e.g. Name, Describe, Compare, Evaluate ...
• instruction/question text written in bold typeface.

The number of marks
The number of marks awarded for a question is given in square brackets [ ] after each question or
sub-part of a question. Typically (though not always), the number of marks gives an indication of
how many different points you need to make. For example:

Explain what is meant by an array?                                                                      [2]

There are potentially 5 or 6 different points you could make to answer this question successfully.
However, as it is only worth 2 marks, the examiner is most likely only looking for any 2 correct
comments. You could spend 20 minutes writing 6 accurate comments but you would still only
score 2 marks because this is the maximum allowance for the question.

You won’t lose marks for writing extra answers, but you could waste time writing points that won’t
score you any more marks. Also, the examiner might decide to only look at the first 2 points and
ignore the rest. So, make sure you always put the answers you are certain are correct first before
any additional answers.

The number of marks indicates the likely number of separate points you need to make and it is also
a good indicator of how much time you should spend on that particular question. For example, it
is more sensible to spend much longer writing the answer to a question worth 5 marks than one
worth 1 mark. BUT this is not always the case. For some questions the number of marks does not
indicate how many comments you need to make. So, you should always take the time to think
about the questions and your answer before committing pen to paper.

Key instruction words
The examination questions will often have a single word that tells you how much detail is required
for each answer. Typically, the following is true:

• State, Name, List means that all you have to do is give the name of what is being asked for.
• Describe, Explain means you should say how something works (this will depend on the actual question;
   it might not always be how, but it implies more detail than simply listing a name).
• Compare means you should say how both things work and sometimes what are the differences between
   them.
• Evaluate means you should say how well or poorly the object/method/application etc. works, or how
   suitable something is for a given scenario.
• Justify means you should say why the object/method/application etc. has been chosen.

Think about what each of these words is asking for in the question. This can really help you write a
good answer.

Instruction text written in bold or capitals
This applies to general text within the instruction/question as well as the key instruction words
mentioned above. When certain text is given in bold or sometimes CAPITALS it is meant to draw
your attention to it and emphasise its meaning.

Often a question will ask you to give ‘two examples of ’ where the word ‘two’ is written in bold
typeface. This means that you have to supply at least two examples in order to get the marks even
if the answer is only worth 1 mark! In other words, it is a clue to the minimum number of different
points you need to make regardless of the number of marks. Similarly, if the question included the
instruction to ‘Evaluate’ it means that you have to do more than simply state the name of a term.
You have to also discuss its advantages and disadvantages, for example. In such cases, you are being
given a direct and clear instruction and you must follow it in order to get the marks.

Examination technique
An hour of revision the night before the examination is a good idea. It is not a good idea to cram
in as much revision as possible at the last minute by spending hours revising the night before; or to
stay up into the early hours of the next morning.

What you really need to do is:

• get a good night’s sleep and relax in order to be in a positive frame of mind
• make sure you have the correct equipment (remember to take spare pens)
• when you get to the examination room:
– arrange your equipment on the desk and relax
– before you write anything, quickly read through all the questions on the examination paper,
   the last question is not always the hardest
– start by answering a couple of the easier questions to boost your confidence, then tackle the
   more difficult questions
– an alternative would be to answer the questions in increasing level of difficulty so that you
   leave the more difficult questions until the end
– remember to keep an eye on the time, don’t spend too long on any one question, even if you
   know everything there is to know on that particular topic. The question paper will usually
   tell you how many marks there are for each question and so you can use this to work out
   how much time to spend on each question
– in cases where you have to write in the answer, make sure that you answer every question
   even if you are having to guess. You cannot score marks if you don’t give an answer, but an
   educated guess could end up scoring you marks.

   Make sure you take whatever approach works for you. We are all different and work in different ways.
   Above all, READ THE QUESTION THOROUGHLY.



Wednesday, April 22, 2015

TCP/IP and Mac Address

MAC address:

A media access control address (MAC address) is a unique identifier assigned to network interfaces for communications on the physical network segment. MAC addresses are used as a network address for most IEEE 802 network technologies, including Ethernet and WiFi. Logically, MAC addresses are used in the media access control protocol sublayer.

MAC addresses are most often assigned by the manufacturer of a network interface controller (NIC) and are stored in its hardware, such as the card's read-only memory or some other firmware mechanism. If assigned by the manufacturer, a MAC address usually encodes the manufacturer's registered identification number. It may also be known as an Ethernet hardware address (EHA), hardware address or physical address. This can be contrasted to a programmed address, where the host device issues commands to the NIC to use an arbitrary address.


What is TCP/IP?

TCP/IP stands for Transmission Control Protocol / Internet Protocol. It defines how electronic devices (like computers) should be connected over the Internet, and how data should be transmitted between them.
TCP - Transmission Control Protocol
TCP is responsible for breaking data down into small packets before they can be sent over a network, and for assembling the packets again when they arrive.
IP - Internet Protocol
IP takes care of the communication between computers. It is responsible for addressing, sending and receiving the data packets over the Internet.

TCP/IP Protocols For the Web

Web browsers and servers use TCP/IP protocols to connect to the Internet. Common TCP/IP protocols are:
HTTP - Hyper Text Transfer Protocol
HTTP takes care of the communication between a web server and a web browser. HTTP is used for sending requests from a web client (a browser) to a web server, returning web content (web pages) from the server back to the client.
HTTPS - Secure HTTP
HTTPS takes care of secure communication between a web server and a web browser. HTTPS typically handles credit card transactions and other sensitive data.
FTP - File Transfer Protocol
FTP takes care of transmission of files between computers.
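As a rough illustration of how a web client rides on top of TCP, the following sketch uses Python's standard socket module to open a TCP connection to a web server and send a plain HTTP request by hand (example.com and port 80 are just placeholder values):

import socket

# Open a TCP connection to a web server (port 80 is the standard HTTP port).
# "example.com" is only a placeholder host for illustration.
with socket.create_connection(("example.com", 80)) as conn:
    # HTTP is a text protocol carried over TCP: send a request...
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    conn.sendall(request.encode("ascii"))

    # ...and read the response back. TCP reassembles the packets for us,
    # so the application just sees a stream of bytes.
    response = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n")[0].decode())   # e.g. "HTTP/1.1 200 OK"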

What is SSL?

SSL stands for Secure Sockets Layer and is a security protocol used on the Internet. This is the technology that shows a “lock icon” and/or a green address bar in the browser to let people know that they’re visiting a website secured with SSL / TLS. (Note: TLS refers to Transport Layer Security, the newer protocol that has succeeded SSL; the two names are often used together.)


Simply, SSL is a way to encrypt data that is sent from a web browser (like Internet Explorer, Firefox, or Chrome) to the web server. While it was primarily used in the past to protect sensitive information like credit card numbers, these days it is used much more widely.

Without encryption, any information sent from the web browser to the web server can fall prey to a man-in-the-middle attack – which refers to bad guys grabbing the data after it leaves the browser and before it reaches the server. By encrypting the data going from a browser to a server, it’s possible to make man-in-the-middle attacks more difficult to pull off successfully.

Understanding Public Key Cryptography

To understand how SSL protects sensitive data, you need to know a thing or two about public key cryptography. While this deals with a lot of very complex math, we’re going to skip over a lot of the technical details in this guide and give you just the basic information you need to know in order to understand how SSL works.
Basically, an SSL connection uses a pair of keys: a public key and a private key. The web browser uses the public key to encrypt data, and the server uses the private key to decrypt it. Because encrypting and decrypting everything with this key pair every time a connection is made would take a lot of processing power, a symmetric key is created after the initial communication between the browser and server and used for the rest of the session.

Establishing an SSL Connection

Next, I'm going to give a very basic outline of the process of establishing an SSL connection.
  1. The browser requests an HTTPS web page
  2. The web server sends its public key and certificate
  3. The browser examines the SSL certificate
  4. The browser creates a symmetric key, encrypts it with the public key and sends it to the server
  5. The web server decrypts the symmetric key with its private key
  6. The web server sends the browser the page, encrypted with the symmetric key
  7. The browser decrypts the data and displays the page
What’s amazing is that all that happens very quickly – without most people even noticing it’s going on under the hood.
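For a feel of what this looks like in practice, here is a minimal sketch using Python's standard ssl module to wrap a TCP connection in SSL/TLS; the handshake steps outlined above happen inside wrap_socket (example.com is only a placeholder host, and real TLS key exchange is somewhat more involved than the simplified outline).

import socket
import ssl

# "example.com" is just a placeholder host for illustration.
hostname = "example.com"
context = ssl.create_default_context()   # loads trusted CA certificates

# Open a TCP connection on port 443 and wrap it in TLS/SSL.
with socket.create_connection((hostname, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        # By this point the handshake (certificate check, key exchange)
        # has already happened.
        print("Negotiated protocol:", tls_sock.version())   # e.g. "TLSv1.3"
        print("Cipher in use:      ", tls_sock.cipher()[0])

        # Everything sent from here on is encrypted on the wire.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                         b"Connection: close\r\n\r\n")
        print(tls_sock.recv(64))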

Monday, March 9, 2015

Parity Bit and Parity Errors



Parity Bit

7 bits of data (count of 1 bits)    8 bits including parity
                                    even          odd
0000000 (0)                         00000000      00000001
1010001 (3)                         10100011      10100010
1101001 (4)                         11010010      11010011
1111111 (7)                         11111111      11111110

A parity bit, or check bit, is a bit added to the end of a string of binary code that indicates whether the number of bits in the string with the value one is even or odd. Parity bits are used as the simplest form of error detecting code. There are two variants of parity bits: the even parity bit and the odd parity bit.

In the case of even parity, the number of bits whose value is 1 in a given set are counted. If that total is odd, the parity bit value is set to 1, making the total count of 1's in the set an even number. If the count of ones in a given set of bits is already even, the parity bit's value remains 0.

In the case of odd parity, the situation is reversed: if the count of bits with a value of 1 is odd, the parity bit's value is set to 0, and if that count is even, the parity bit is set to 1, making the total count of 1's in the set an odd number.

Even parity is a special case of a cyclic redundancy check (CRC), where the 1-bit CRC is generated by the polynomial x+1. If the parity bit is present but not used, it may be referred to as mark parity (when the parity bit is always 1) or space parity (the bit is always 0).

Parity
In mathematics, parity refers to the evenness or oddness of an integer, which for a binary number is determined only by the least significant bit. In telecommunications and computing, parity refers to the evenness or oddness of the number of bits with value one within a given set of bits, and is thus determined by the value of all the bits. It can be calculated via an XOR sum of the bits, yielding 0 for even parity and 1 for odd parity. This property of being dependent upon all the bits and changing value if any one bit changes allow for its use in error detection schemes.

Error detection
If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. The parity bit is only suitable for detecting errors; it cannot correct any errors, as there is no way to determine which particular bit is corrupted. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, successful transmission can therefore take a long time, or may even never occur. However, parity has the advantage that it uses only a single bit and requires only a number of XOR gates to generate. See Hamming code for an example of an error-correcting code.

Parity bit checking is used occasionally for transmitting ASCII characters, which have 7 bits, leaving the 8th bit as a parity bit. For example, the parity bit can be computed as follows, assuming we are sending the simple 4-bit value 1001.

Even parity – successful transmission:
A wants to transmit: 1001
A computes parity bit value: 1+0+0+1 (mod 2) = 0
A adds the parity bit and sends: 10010
B receives: 10010
B computes parity: 1+0+0+1+0 (mod 2) = 0
B reports correct transmission after observing the expected even result.

Odd parity – successful transmission:
A wants to transmit: 1001
A computes parity bit value: 1+0+0+1+1 (mod 2) = 1
A adds the parity bit and sends: 10011
B receives: 10011
B computes overall parity: 1+0+0+1+1 (mod 2) = 1
B reports correct transmission after observing the expected odd result.

This mechanism enables the detection of single bit errors, because if one bit gets flipped due to line noise, there will be an incorrect number of ones in the received data. In the two examples above, B's calculated parity value matches the parity bit in its received value, indicating there are no single bit errors. Consider the following example with a transmission error in the second bit using XOR:

Even parity – error in the second bit:
A wants to transmit: 1001
A computes parity bit value: 1^0^0^1 = 0
A adds the parity bit and sends: 10010
...TRANSMISSION ERROR...
B receives: 11010
B computes overall parity: 1^1^0^1^0 = 1
B reports incorrect transmission after observing the unexpected odd result.

Even parity – error in the parity bit:
A wants to transmit: 1001
A computes even parity value: 1^0^0^1 = 0
A sends: 10010
...TRANSMISSION ERROR...
B receives: 10011
B computes overall parity: 1^0^0^1^1 = 1
B reports incorrect transmission after observing the unexpected odd result.

There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors. If an even number of bits have errors, the parity bit records the correct number of ones, even though the data is corrupt. (See also error detection and correction.) Consider the same example as before with an even number of corrupted bits:

Even parity – two corrupted bits:
A wants to transmit: 1001
A computes even parity value: 1^0^0^1 = 0
A sends: 10010
...TRANSMISSION ERROR...
B receives: 11011
B computes overall parity: 1^1^0^1^1 = 0
B reports correct transmission even though it is actually incorrect: B observes even parity, as expected, and so fails to catch the two bit errors.
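The parity calculation and check above can be summarised in a few lines of Python; this is just a sketch of the idea, not any particular hardware implementation.

def parity_bit(bits, odd=False):
    # Return the parity bit for a string of '0'/'1' characters.
    # Even parity (default): the bit makes the total number of 1s even.
    # Odd parity:            the bit makes the total number of 1s odd.
    bit = bits.count("1") % 2        # 1 if the count of ones is odd
    if odd:
        bit ^= 1
    return str(bit)

def check(received, odd=False):
    # Return True if the received word (data + parity bit) looks valid.
    expected = 1 if odd else 0
    return received.count("1") % 2 == expected

data = "1001"
sent = data + parity_bit(data)       # even parity -> "10010"
print(sent, check(sent))             # 10010 True

# A single flipped bit is detected...
print(check("11010"))                # False
# ...but two flipped bits cancel out and slip through undetected.
print(check("11011"))                # True (even though the data is corrupt)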

Usage
Because of its simplicity, parity is used in many hardware applications where an operation can be repeated in case of difficulty, or where simply detecting the error is helpful. For example, the SCSI and PCI buses use parity to detect transmission errors, and many microprocessor instruction caches include parity protection. Because the I-cache data is just a copy of main memory, it can be disregarded and re-fetched if it is found to be corrupted.
In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. This format neatly accommodates all the 7-bit ASCII characters in a convenient 8-bit byte. Other formats are possible; 8 bits of data plus a parity bit can convey all 8-bit byte values.
In serial communication contexts, parity is usually generated and checked by interface hardware (e.g., a UART) and, on reception, the result made available to the CPU (and so to, for instance, the operating system) via a status bit in a hardware register in the interface hardware. Recovery from the error is usually done by retransmitting the data, the details of which are usually handled by software (e.g., the operating system I/O routines).

Thursday, February 12, 2015

CPU and Registers

How does a CPU use the registers?

This is the first part in, I hope, a long series of short articles, which will explain some of the basics about the internal design of CPUs.

In this first part we’ll look at the registers, or more specifically, how the registers are used.

Registers?

A CPU usually contains a register file. A register file consists of a number of registers plus status bits, which give information about the registers, for instance whether an overflow occurred. Registers are needed because a CPU cannot work directly on data stored in memory: if the CPU wants to work with data, it first copies the data into registers and afterwards copies the result back. Executing an instruction usually takes five steps:

• Instruction Fetch: the instruction to be executed is fetched from memory, or more often from the (instruction) cache.
• Instruction Decode: the instruction is translated into the control signals that are specific to the CPU.
• Instruction Execute: the instruction is executed.
• Memory Access: if needed (only for load, store and branch instructions), data is read from or written to memory or the cache.
• Write Back: if needed (only for instructions whose result must be stored in a register), the result of the execute step is stored into a register.

The question now is: how do the registers fit in here? In general there are four ways to use the registers. I'll illustrate this with a simple calculation:

C = A + B

This calculation can be done using one of four styles of register usage: accumulator, stack, register-memory or register-register.

Accumulator
The calculation requires three steps:

Load A
Add B
Store C

Only one register is used, the accumulator. First, memory data A is placed into this register. Then memory data B is added to the contents of the register, so the register holds the result of A+B. Finally, the result is placed in memory location C. This is the simplest way of using a register, but also a very limited one, since complex instructions that need several different values are much harder to code. Another problem is that every step accesses memory.

Memory is much slower than register space, so the CPU clock is limited by the memory speed. For this reason, accumulator-style register usage is now mostly found in microcontrollers.

Stack
A more efficient way of using registers is stack-based:

Push A
Push B
Add
Pop C

The easiest way to explain this is with a picture of the stack, here assumed to have three registers. After the first instruction - push A - the value of A is placed on top of the stack. After the second instruction - push B - the value A is pushed down into the stack and B is placed on top. The third instruction adds the two top-most values: the top value is popped off, A shifts back to the top-most position, and the result A+B is then written into that top-most place.

The fourth and last instruction simply pops this A+B value off the stack and puts the result in memory location C. As one can see, this is an improvement over the accumulator: more than one register is available for calculation, so less memory access is required. This style of register usage is used, for instance, in x87-based designs, better known as the FPU of PC processors such as the Intel Pentium III and AMD Athlon, and also in Java processors such as the picoJava-I. Of course the downside is obvious.

Only the top-most registers can be used, which limits the freedom of register usage. This is also one of the reasons why even the powerful FPUs of the Pentium III and AMD Athlon are often no match for RISC CPUs such as the Alpha 21164 or MIPS R12000.

Register-Memory
A more flexible way of using registers is the register-memory style:

Load reg1, A
Add reg1, B
Store C, reg1

In this simple example only one register is used, but it is not difficult to see that there is more freedom than in the stack-based case: each register in the register file can be accessed directly, not just the top-most one. This technique is often used in CISC-based CPUs.

The reason lies in the past. The ‘glory days’ of CISC were roughly 1960 to 1975, a period in which memory was extremely expensive, so systems were equipped with relatively slow memory. To understand why this matters, look back at the five steps required to execute an instruction. Step 1, the instruction fetch, is determined by the speed of the memory (including cache memory).

All following steps (except perhaps the store back to memory) are dominated by CPU speed. CPUs of that time were only partially pipelined, but to keep things simple we will assume they were fully pipelined. The fetch then becomes the dominating step: the decode step takes less time than the fetch, but because a new fetch of another instruction is already taking place during the decode - remember, the chip is assumed to be pipelined - the time saved cannot be used. With those chips, getting maximum performance was simply a matter of doing as much work as possible in as few instructions as possible.

No matter how complex an instruction is, it will still take less time than a fetch. This explains why CISC has many complex instructions and why few instructions are needed for a task. The downside is that increasing the clock speed of such a complex chip is difficult, but this was no problem while the (cache) memory could not keep up anyway. That changed around 1975, when cheap and fast memory arrived.

Register-Register
When fast memory chips became available, things changed. It no longer mattered how many cycles a task needed: if the chip is pipelined, the effective throughput is one instruction per cycle anyway. Now it was simply a case of getting as much clock speed out of the chip as possible. As a result, register usage changed:

Load reg1, A
Load reg2, B
Add reg1, reg2
Store C, reg1

More instructions are needed than in the previous case, but each one performs less work. That makes it easier to raise the clock speed of the chip, which results in higher performance.

This policy is very common in RISC designs. One downside is that more registers are needed. The chip design is also usually kept simple to prevent awkward instructions, which are hard to optimise, from becoming bottlenecks. Therefore RISC chips often have many registers, few instructions and deep pipelines.

Final Words
These are the four basic ways to use registers. Of course, real-world CPUs will not always fit neatly into one category and may use combinations. For instance, the AMD Athlon is a CISC chip on the outside but RISC-like on the inside, although its FPU design is still stack-based. Still, I hope this article gave you some basic knowledge of how registers are used in a chip.
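As a rough sketch of the register-register style described above, a tiny Python simulator might look like the following (the instruction names, the two-register file and the memory layout are invented for illustration):

# Tiny register-register machine: every operation works on registers;
# memory is only touched by explicit loads and stores.
memory = {"A": 5, "B": 7, "C": 0}
registers = {"reg1": 0, "reg2": 0}

program = [
    ("LOAD",  "reg1", "A"),       # reg1 <- memory[A]
    ("LOAD",  "reg2", "B"),       # reg2 <- memory[B]
    ("ADD",   "reg1", "reg2"),    # reg1 <- reg1 + reg2 (no memory access)
    ("STORE", "C",    "reg1"),    # memory[C] <- reg1
]

for op, dst, src in program:
    if op == "LOAD":
        registers[dst] = memory[src]
    elif op == "ADD":
        registers[dst] = registers[dst] + registers[src]
    elif op == "STORE":
        memory[dst] = registers[src]

print(memory["C"])   # 12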

Wednesday, February 11, 2015

What is data encryption?

What is encryption?

Encryption is a technique for transforming information on a computer in such a way that it becomes unreadable. So, even if someone is able to gain access to a computer with personal data on it, they likely won’t be able to do anything with the data unless they have complicated, expensive software or the original data key. The basic function of encryption is essentially to translate normal text into ciphertext. Encryption can help ensure that data doesn’t get read by the wrong people, but can also ensure that data isn’t altered in transit, and verify the identity of the sender.

3 different encryption methods

There are three different basic encryption methods, each with their own advantages (list courtesy of Wisegeek):

Hashing
Hashing creates a unique, fixed-length signature for a message or data set. Each “hash” is unique to a specific message, so minor changes to that message would be easy to track. Once data is encrypted using hashing, it cannot be reversed or deciphered. Hashing, then, though not technically an encryption method as such, is still useful for proving data hasn’t been tampered with.
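As a quick sketch using Python's standard hashlib module (SHA-256 is just one common choice of hash function), even a one-character change to a message produces a completely different hash, which is what makes tampering easy to spot:

import hashlib

message = b"Transfer 100 to Alice"
tampered = b"Transfer 900 to Alice"

# A fixed-length "signature" (digest) of each message.
print(hashlib.sha256(message).hexdigest())
print(hashlib.sha256(tampered).hexdigest())

# Even a one-character change produces a completely different digest,
# so the receiver can detect that the message was altered.
print(hashlib.sha256(message).hexdigest() == hashlib.sha256(tampered).hexdigest())  # False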

Symmetric methods
Symmetric encryption is also known as private-key cryptography, and is called so because the key used to encrypt and decrypt the message must remain secure, because anyone with access to it can decrypt the data. Using this method, a sender encrypts the data with one key, sends the data (the ciphertext) and then the receiver uses the key to decrypt the data.
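A minimal sketch of symmetric encryption, assuming the third-party cryptography package is installed (Fernet is simply one ready-made symmetric scheme, used here as an example):

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # the single secret key both sides must share
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet me at noon")   # sender encrypts with the key
print(ciphertext)                                 # unreadable without the key

plaintext = Fernet(key).decrypt(ciphertext)       # receiver decrypts with the same key
print(plaintext)                                  # b'meet me at noon'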

Asymmetric methods
Asymmetric encryption, or public-key cryptography, is different than the previous method because it uses two keys for encryption or decryption (it has the potential to be more secure as such). With this method, a public key is freely available to everyone and is used to encrypt messages, and a different, private key is used by the recipient to decrypt messages.
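And a minimal sketch of asymmetric encryption with the same third-party cryptography package, using an RSA key pair with typical example parameters:

# Again using the third-party "cryptography" package, this time for an
# RSA public/private key pair (parameters are typical example values).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()       # this half can be shared freely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"secret message", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)          # only the key owner can decrypt
print(plaintext)                                           # b'secret message'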

Uses of hexadecimal numbers

Hexadecimal refers to the base-16 number system, which consists of 16 unique symbols, in contrast to the ten unique symbols of the commonly used decimal (i.e., base 10) numbering system. The numbers 0 through 9 are the same in both systems; however, the decimal numbers 10 through 15 are represented by the letters A through F. Thus, for example, the decimal number 11 is represented by B in the hexadecimal system and decimal 14 is represented by E.

The hexadecimal system is commonly used by programmers to describe locations in memory because it can represent every byte (i.e., eight bits) as two consecutive hexadecimal digits, instead of the eight digits that would be required by binary (i.e., base 2) numbers or the three digits that would be required with decimal numbers. Some port addresses are also given in hexadecimal.

In addition, it is much easier for humans to read hexadecimal numbers than binary numbers, and it is not much more difficult for computer professionals to read hexadecimal numbers than decimal numbers. 

Moreover, conversion between hexadecimal and binary numbers is also easy after a little practice. For example, to convert a byte value from hexadecimal to binary, all that is necessary is to translate each individual hexadecimal digit into its four-bit binary equivalent. 
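In Python, for instance, these conversions are one-liners with the built-in int(), hex() and format() functions:

# Converting between hexadecimal, binary and decimal with built-in functions.
value = int("2F5B", 16)            # parse a hexadecimal string
print(value)                       # 12123 (decimal)
print(format(value, "016b"))       # 0010111101011011 (binary, 16 bits)
print(hex(value))                  # 0x2f5b

# Digit by digit: each hex digit maps to exactly four binary digits.
for digit in "2F5B":
    print(digit, format(int(digit, 16), "04b"))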

Hexadecimal numbers are indicated by the addition of either a 0x prefix or an h suffix. For example, the hexadecimal number 0x2F5B translates to the binary number 0010 1111 0101 1011. int 0x80 is the assembly language instruction that is used to invoke system calls in Linux on x86 (i.e., Intel-compatible) processors. The 0x in it indicates that it is not a decimal 80 but rather a hexadecimal 80 (which is a decimal 128). A system call is a request in a Unix-like operating system made by a process for a service performed by the kernel.

A common use of hexadecimal numbers is to describe colors on web pages. Each of the three primary colors (i.e., red, green and blue) is represented by two hexadecimal digits, giving 256 possible values each and thus resulting in more than 16 million possible colors. For example, the HTML (hypertext markup language) code telling a browser to render the background color of a web page as red is #FF0000, and the code telling it to render the page as white is #FFFFFF.

Hexadecimal numbers

Hexadecimal

In mathematics and computing, hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine, and A, B, C, D, E, F (or alternatively a–f) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer systems designers and programmers. Several different notations are used to represent hexadecimal constants in computing languages; the prefix "0x" is widespread due to its use in Unix and C (and related operating systems and languages). Alternatively, some authors denote hexadecimal values using a suffix or subscript. For example, one could write 0x2AF3 or 2AF3₁₆, depending on the choice of notation.

As an example, the hexadecimal number 2AF3₁₆ can be converted to an equivalent decimal representation. Observe that 2AF3₁₆ is equal to the sum (2000₁₆ + A00₁₆ + F0₁₆ + 3₁₆), obtained by decomposing the numeral into a series of place-value terms. Converting each term to decimal, one can further write:
(2₁₆ × 16³) + (A₁₆ × 16²) + (F₁₆ × 16¹) + (3₁₆ × 16⁰), which is
(2 × 4096) + (10 × 256) + (15 × 16) + (3 × 1), or 10995.

Each hexadecimal digit represents four binary digits (bits), and the primary use of hexadecimal notation is a human-friendly representation of binary-coded values in computing and digital electronics. One hexadecimal digit represents a nibble, which is half of an octet or byte (8 bits). For example, byte values can range from 0 to 255 (decimal), but may be more conveniently represented as two hexadecimal digits in the range 00 to FF. Hexadecimal is also commonly used to represent computer memory addresses.

Tuesday, January 27, 2015

Introduction to Communication

 Introduction to Communication Interface


1. Parallel Data Transmission

Parallel ports were originally developed by IBM as a way to connect a printer to a PC. When IBM was in the process of designing the PC, the company wanted the computer to work with printers offered by Centronics, a top printer manufacturer at the time. IBM decided not to use the same port interface on the computer that Centronics used on the printer. Instead, IBM engineers coupled a 25-pin connector, DB-25, with a 36-pin Centronics connector to create a special cable to connect the printer to the computer. Other printer manufacturers ended up adopting the Centronics interface, making this strange hybrid cable an unlikely de facto standard.
When a PC sends data to a printer or other device using a parallel port, it sends 8 bits of data (1 byte) at a time. These 8 bits are transmitted parallel to each other. The standard parallel port is capable of sending 50 to 100 kilobytes of data per second.
Advantages of Parallel Data Transmission:
  •  Faster than serial transmission -- able to send multiple bits simultaneously
  •  Doesn’t require a high frequency of operation
Disadvantages of Parallel Data Transmission:
  •  Requires separate lines for each bit of a word
  •  Costly to run long distances due to multiple wires
  •  Suffers from electromagnetic interference
  •  Cable lengths more limited than a serial cable

Applications: Parallel ports can be used to connect a host of popular computer peripherals such as printers, scanners, CD burners, external hard drives, Iomega Zip drives, network adapters, and tape backup drives.

Types of parallel port
At present, four types of parallel port are known:
  • Standard parallel port (SPP)
  • Parallel port PS/2 (bidirectional)
  • Enhanced Parallel Port (EPP)
  • Extended Capability Port (ECP)
SPP/EPP/ECP
The original specification for parallel ports was unidirectional, meaning that data only traveled in one direction for each pin. With the introduction of the PS/2 in 1987, IBM offered a new bidirectional parallel port design. This mode is commonly known as Standard Parallel Port (SPP) and has completely replaced the original design. Bidirectional communication allows each device to receive data as well as transmit it.

Many devices use the eight pins (2 through 9) originally designated for data. Using the same eight pins limits communication to half-duplex, meaning that information can only travel in one direction at a time. But pins 18 through 25, originally just used as grounds, can be used as data pins also. This allows for full-duplex (both directions at the same time) communication.

2. Serial Data Transmission:
1. Synchronous Data Transmission
Data is transmitted one bit at a time, using a clock to maintain integrity between words.
Advantages:
  • Only one (half duplex) or two (full duplex) wires are required to send/receive data.
  • Low cost due to low number of wires.
Disadvantages:
  • Lower speeds than parallel transmissions.
  • Difficult to maintain data integrity due to problems with synchronizing clocks.
2. Asynchronous Data Transmission:
Data is transmitted one bit at a time, using start bits and stop bits to maintain integrity between words.
Disadvantages:   Lower speeds than parallel transmissions.


                                    
Key words

Baud Rate:
The measure of the number of signal elements transmitted or received per second. Baud rates and data bit rates (bps-bit per second) are not equal in asynchronous transmission due to the start and stop bits.
Start Bit:
The bit preceding every word that signals to the receiver that a data word is coming. In some microcontrollers (e.g., HC11) the start bit is logic low (0), while in others it is logic high (1).
Parity Bit:
A bit sometimes added to the end of the data word. There are three possible settings for the parity: none, even, and odd. The setting determines whether the count of 1’s transmitted should be even or odd.
Stop Bit:
The bit or bits following every word that signal the end of a data word. In some microcontrollers (e.g., HC11) the stop bit is logic high (1), while in others it is logic low (0).
Half Duplex:
Two-way serial communication using only one line. With half duplex, the device cannot transmit and receive at the same time.
Full Duplex:
Two-way serial communication using two lines. With full duplex, data can be simultaneously transmitted and received.
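Putting the key words together, here is a small Python sketch that builds one asynchronous serial frame for the character 'A'; the start/stop logic levels, LSB-first bit order and even parity are common conventions assumed for illustration, not a description of any particular UART.

# Building one asynchronous serial frame for the character 'A' (0x41),
# assuming: start bit = 0, 7 data bits sent LSB first, even parity, 1 stop bit = 1.
char = ord("A")                                   # 0x41 = 1000001 in binary
data_bits = [(char >> i) & 1 for i in range(7)]   # least significant bit first

parity = sum(data_bits) % 2               # even parity: make total count of 1s even
frame = [0] + data_bits + [parity] + [1]  # start bit, data, parity, stop bit

print(frame)        # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]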
