So you're thinking of buying a new computer... Where do you start? There are so many brands and models of computers available, and it can all be a little overwhelming when you start to look around. How do you decide what type of computer you need? And perhaps more importantly, how do you decide what the best value is? I have sold computers professionally for almost 20 years, and there are certain "tricks of the trade" that most computer stores and salespeople use. Knowing these secrets can make your decision easier and will help you buy the right computer for your needs.

1. Buy What You Need, Maybe a Little More

One of the most important things you can do when buying a new computer is make a list of the things that you will be using it for. There are so many different models - with different capabilities - that you can easily buy more, or less, than you really need if you don't. If this is your first computer, this can be a little tougher. Until you've used a computer, it's hard to know exactly what you might want to do with it beyond the obvious, like connecting to the internet. Regardless, you should think about some of the things you might want to do. Some possibilities include:

- Connect to the internet
- Play games
- Digital photography
- Digital video
- Type documents
- Accounting
- Design websites
- Programming
- Digital scrapbooking
- Genealogy

Some of these things need more power than others. For example, connecting to the internet really doesn't need a lot of power. Even the most basic computer available will probably work just fine. Digital video and many games need a lot more power. If you don't get a fast enough computer with enough memory, you'll be disappointed with the performance. Knowing what you're going to be using your computer for will help your salesperson, whether they're on the phone, the internet or standing in front of you, recommend the best system for your needs.
As a general rule you're always better off buying more power than you need rather than less, but buying too much can be a waste of money.

2. Warranty Considerations

Computer warranties are one of the most confusing and obscure parts of your purchase. Most manufacturers have cut back on their customer service to the point where poor service has become a given. The three most common options are onsite, carry-in or manufacturer's depot service. Onsite service can be helpful, but think about whether you want to have to be available for a technician to come and diagnose your computer, and possibly have to come back with parts at another time. Carry-in service is a good option, but find out whether the service center is factory authorized for warranty repairs, as well as whether the technicians are all certified. Shipping your computer to a factory service center can take a long time - sometimes a number of weeks. It also creates the risk that your computer will be damaged or even lost in shipping. In some cases, the manufacturer will even replace your computer with another unit and ship it back to you, rather than repairing it. This can result in your losing any information that was on your system and having to reload all your software.

Another aspect of the warranty to find out about is technical support. Find out if the computer manufacturer offers a toll-free phone number and what the quality of service is like. The better computer salespeople will be honest about this and tell you if a company's service leaves something to be desired. You can also do some research on the internet - most of the computer magazines like PC Magazine and PC World have annual customer service comparisons that rate the larger computer companies. Always find out how the warranty is handled before making your decision. Even if it doesn't influence your choice, knowing what to expect if something does go wrong will save some nasty surprises down the road.

3. Can You Negotiate the Price Down?
A computer is a relatively large investment - anywhere from a few hundred to a few thousand dollars. Many computer buyers expect that there is a significant amount of "wiggle room" on the price. The reality is that most computer hardware - the physical pieces like the computer, monitor and printer - is sold at very low profit margins. Often, computer systems are even sold at or below the dealer cost. When you're buying a computer, it never hurts to ask for a better deal, but don't be surprised if you only get a few dollars off, if anything. Over the close to 20 years I've sold computers, I watched the profit margins go from over 40% to less than 5%. It's almost embarrassing to offer a $20 discount on a $2500 computer system, but that could mean the difference between making and losing money on the sale. What you can do to get the best price is some comparison shopping. Most computer stores offer price-matching guarantees, so if you find your computer for less at another store, most dealers will match or beat that price, even if it means they lose money.

4. How Do Computer Stores Make Any Money?

You might be wondering how these computer stores make any money if they're selling computers for so little profit. Their money is made on add-on items. The highest profit areas in most computer stores are cables and "consumable" products such as printer ink and paper. Printer ink is a huge money-maker for most computer stores (even more so for the printer manufacturers). Why is this? Once you've bought a printer, you're going to have to replace your ink at some point, and continue to replace it as it runs out. Most chain computer stores and office supply stores that carry a large selection of ink cartridges make more from ink than they do from the computers themselves. Cables also have huge markups. A cable that costs the store $2-3 will often sell for $20-30. That's ten times their cost! If you're buying a new computer, you will likely need to buy some cables.
Some items - printers, for example - don't often include the cables needed to hook them up. Many printers also come with "starter" ink cartridges that are only half-full. You might also want to pick up some extra ink cartridges. This is where you should be able to negotiate a better price. Don't expect the salesperson to throw them in for nothing, but they should be willing to offer you a better price. After all, if you're happy with their service, you'll probably continue to buy your ink, paper and other products from that store in the future.

5. What Software is Included?

The last secret of buying a new computer has to do with the software that is included. Most new computer systems include quite a few programs, and sometimes the value of the software can be quite high. Something to watch out for when looking at the included software is "trial versions" or "limited editions". Many programs that are preloaded are either crippled versions that don't have all the features of the full program, or trial versions that will only run for a certain amount of time before they expire. Computers are often sold with trial versions of the following types of software:

- Antivirus
- Firewall
- MS Office or other office suites
- Accounting - both business and personal

The computer manufacturers generally don't make it easy to tell whether the software on their systems is a trial version or a limited version. This is a question that you should specifically ask if you can't find the answer in their promotional information. If you're buying a new computer with trial versions of the software, keep in mind that you will need to pay to continue using it after the trial period is over. This is an added cost that you need to consider as part of your overall budget.

These five "secrets" of buying a new computer are fairly common sense, but they are not always made clear up front. Knowing what to ask will help you in two ways.
First, you can be sure you are getting the right computer for your needs. Second, if the salesperson or company that you're dealing with explains these things to you without being asked, you'll know you're dealing with someone who is honest and upfront. Knowing you can trust the people you're dealing with is an invaluable feature of your new computer system.
When you start a new ASP.NET Web Application project, you will see a number of default templates that Microsoft provides, so you can choose one and get your application up and running quickly.
An ASP.NET Web Application comes with the templates below; let's go through them so you can select the right one for the application you are building.
- MVC: Let’s start with this template, because it’s the one you’ll use the most if you start with MVC (for MVC 5 you need to install Microsoft Visual Studio 2013).
- Empty: As you would expect, the empty template sets you up with an empty project skeleton.
- Web Forms: The Web Forms template sets you up for ASP.NET Web Forms development.
- Web API: This creates an application with both MVC and Web API support.
- Azure Mobile Service: If you have Visual Studio 2013 Update 2 (also known as 2013.2) installed, you’ll see this additional option.
- Single Page Application: The Single Page Application template sets you up for an application that’s primarily driven via JavaScript requests to Web API services rather than the traditional web page request / response cycle.
- Facebook: This template makes it easier to build a Facebook “Canvas” application, a web application that appears hosted inside of the Facebook website.
Building a website is not easy: to make your website creative and secure, you need to work with many languages and, especially, a sound code structure. In this post we will get started understanding ASP.NET applications with MVC, which is a newer way of building a website with ASP.NET.
ASP.NET MVC has three important parts: M, V and C.
M stands for Model
V stands for View
C stands for Controller
So let's start to understand what the Model, the View and the Controller each are.
- Model: this is where all the business logic of the application resides: it can range from a simple static class that returns a dataset to a complex multi-assembly Business Logic Layer that uses an assembly specific to the Data Access Layer.
- View: at the other end of the application is the View, which displays the application's user interface and contains the representation of the data that have been retrieved by the Model. This doesn't have logic, other than the one strictly related to the presentation of data.
- Controller: between the two components stands the Controller. It acts as the orchestrator of all the interactions among the other components and the users: it handles the requests, reads the form values, passes them to the Model, decides which View to render and finally sends the data to be rendered to the View.
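These three roles map onto concrete pieces of an ASP.NET MVC project: a plain class for the Model, a Razor template for the View, and a class deriving from Controller. The sketch below is a minimal, hypothetical example (the Product names are invented for illustration, not taken from this post), not a complete project:

```csharp
using System.Web.Mvc; // requires the ASP.NET MVC assemblies

// Model: a plain class holding the data the View will present.
public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Controller: handles the request, builds the Model, chooses the View.
public class ProductController : Controller
{
    public ActionResult Details()
    {
        var product = new Product { Name = "Keyboard", Price = 25m };
        // Renders Views/Product/Details.cshtml, passing the product as its model.
        return View(product);
    }
}
```

The matching View would be a Razor file (Views/Product/Details.cshtml) whose only job is to format the Product it receives.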
Interaction between Model, View and Controller
- The request comes from the client and hits the Controller.
- The Controller calls the Model in order to perform some "business" operations.
- The Model returns the results of the operations back to the Controller.
- The Controller decides which View needs to be rendered and sends it the data that must be rendered.
- Finally the View renders the output and sends the response back to the client.
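The five steps above can be traced in a small, framework-free C# sketch. The ProductModel, ProductView and ProductController names here are invented for illustration; they are not ASP.NET types, just plain classes playing the three roles:

```csharp
using System;
using System.Collections.Generic;

// Model: performs the "business" operation (here, just returning data).
class ProductModel
{
    public List<string> GetProducts()
    {
        return new List<string> { "Keyboard", "Mouse" };
    }
}

// View: renders the data it is handed; no logic beyond presentation.
class ProductView
{
    public string Render(List<string> products)
    {
        return "Products: " + string.Join(", ", products);
    }
}

// Controller: receives the request, calls the Model, picks the View,
// and passes it the data to render.
class ProductController
{
    public string HandleRequest()
    {
        var model = new ProductModel();
        var data = model.GetProducts();   // steps 2-3: call the Model, get results
        var view = new ProductView();     // step 4: decide which View to render
        return view.Render(data);         // step 5: the View renders the response
    }
}

class Program
{
    static void Main()
    {
        // Step 1: the "request" arrives at the Controller.
        Console.WriteLine(new ProductController().HandleRequest());
    }
}
```

In a real ASP.NET MVC application the framework performs step 1 for you by routing the incoming URL to a controller action.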
Everything evolves over time: humans, mobile phones, and even computers. So it is time to look at the evolution of the computer you are using to read this article.
The Mechanical Era (1623-1945)
Trying to use machines to solve mathematical problems can be traced to the early 17th
century. Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz were among the
mathematicians who designed and implemented calculators that were capable of addition,
subtraction, multiplication, and division. The first multi-purpose or programmable
computing device was probably Charles Babbage's Difference Engine, which was begun in
1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the
Analytical Engine but unfortunately it also was only partially completed. Babbage, together
with Ada Lovelace recognized several important programming techniques, including
conditional branches, iterative loops and index variables. Babbage designed the machine
which is arguably the first to be used in computational science. In 1833, George Scheutz and
his son, Edvard began work on a smaller version of the difference engine and by 1853 they
had constructed a machine that could process 15-digit numbers and calculate fourth-order
differences. The US Census Bureau was one of the first organizations to use the mechanical
computers which used punch-card equipment designed by Herman Hollerith to tabulate data
for the 1890 census. In 1911 Hollerith's company merged with a competitor to found the
corporation which in 1924 became International Business Machines (IBM).
First Generation Electronic Computers (1937-1953)
These devices used electronic switches, in the form of vacuum tubes, instead of
electromechanical relays. The earliest attempt to build an electronic computer was by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial differential
equations. By 1941 he and graduate student Clifford Berry had succeeded in building a
machine that could solve 29 simultaneous equations with 29 unknowns. However, the
machine was not programmable, and was more of an electronic calculator.
A second early electronic machine was Colossus, designed by Tommy Flowers for the British
military in 1943. The first general purpose programmable electronic computer was the
Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John
V. Mauchly at the University of Pennsylvania. Research work began in 1943, funded by the
Army Ordnance Department, which needed a way to compute ballistics during World War
II. The machine was completed in 1945 and it was used extensively for calculations during
the design of the hydrogen bomb. Eckert, Mauchly, and John von Neumann, a consultant to
the ENIAC project, began work on a new machine before ENIAC was finished. The main
contribution of EDVAC, their new project, was the notion of a stored program. ENIAC was
controlled by a set of external switches and dials; to change the program required physically
altering the settings on these controls. EDVAC was able to run orders of magnitude faster
than ENIAC and by storing instructions in the same medium as data, designers could
concentrate on improving the internal structure of the machine without worrying about
matching it to the speed of an external control. Eckert and Mauchly later designed what was
arguably the first commercially successful computer, the UNIVAC, in 1952. Software
technology during this period was very primitive.
Second Generation (1954-1962)
The second generation witnessed several important developments at all levels of computer
system design, ranging from the technology used to build the basic circuits to the
programming languages used to write scientific applications. Electronic switches in this era
were based on discrete diode and transistor technology with a switching time of
approximately 0.3 microseconds. The first machines to be built with this technology include
TRADIC at Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Index
registers were designed for controlling loops and floating point units for calculations based
on real numbers.
A number of high level programming languages were introduced and these include
FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of
this era include the IBM 704 and its successors, the 709 and 7094. In the 1950s the first two
supercomputers were designed specifically for numeric processing in scientific applications.
Third Generation (1963-1972)
The third generation brought huge gains in computational power. Innovations in this era
include the use of integrated circuits, or ICs (semiconductor devices with several transistors
built into one physical component),
semiconductor memories, microprogramming as a technique for efficiently designing
complex processors and the introduction of operating systems and time-sharing. The first ICs
were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit
(or ‘chip’), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to
100 devices per chip. Multilayered printed circuits were developed and core memory was
replaced by faster, solid state memories.
In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use
functional parallelism. By using 10 separate functional units that could operate
simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a
computation rate of one million floating point operations per second (Mflops). Five years
later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its
pipelined functional units, is considered to be the first vector processor and was capable of
executing at ten Mflops. The IBM 360/91, released during the same period, was roughly
twice as fast as the CDC 6600.
Early in this third generation, Cambridge University and the University of London
cooperated in the development of CPL (Combined Programming Language, 1963). CPL was,
according to its authors, an attempt to capture only the important features of the complicated
and sophisticated ALGOL. However, like ALGOL, CPL was large with many features that
were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge
developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967). In
1970 Ken Thompson of Bell Labs developed yet another simplification, of BCPL, called simply
B, in connection with an early implementation of the UNIX operating system.
Fourth Generation (1972-1984)
Large scale integration (LSI, around 1,000 devices per chip) and very large scale integration
(VLSI, around 100,000 devices per chip) were used in the construction of the fourth
generation computers.
Whole processors could now fit onto a single chip, and for simple systems the entire
computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays
dropped to about 1ns per gate. Core memories were replaced by semiconductor memories.
Machines with large main memories, like the CRAY 2, began to replace the older high
speed vector processors, such as the CRAY 1, CRAY X-MP and CYBER.
In 1972, Dennis Ritchie developed the C language from the design of the CPL and
Thompson's B. Thompson and Ritchie then used C to write a version of UNIX for the DEC
PDP-11. Other developments in software include very high level languages such as FP
(functional programming) and Prolog (programming in logic).
IBM worked with Microsoft during the 1980s to start what we can really call PC (Personal
Computer) life today. IBM PC was introduced in October 1981 and it worked with the
operating system (software) called Microsoft Disk Operating System (MS-DOS) 1.0.
Development of MS DOS began in October 1980 when IBM began searching the market for
an operating system for the then proposed IBM PC and major contributors were Bill Gates,
Paul Allen and Tim Paterson. In 1983, Microsoft Windows was announced, and it has seen
several improvements and revisions over the years since.
Fifth Generation (1984-1990)
This generation saw the introduction of machines with hundreds of processors that could
all be working on different parts of a single program. The scale of integration in
semiconductors continued at a great pace and by 1990 it was possible to build chips with a
million components - and semiconductor memories became standard on all computers.
Computer networks and single-user workstations also became popular.
Parallel processing started in this generation. The Sequent Balance 8000 connected up to 20
processors to a single shared memory module though each processor had its own local cache.
The machine was designed to compete with the DEC VAX-780 as a general purpose Unix
system, with each processor working on a different user's job. However Sequent provided a
library of subroutines that would allow programmers to write programs that would use more
than one processor, and the machine was widely used to explore parallel algorithms and
programming techniques. The Intel iPSC-1, also known as ‘the hypercube’ connected each
processor to its own memory and used a network interface to connect processors. This
distributed memory architecture meant memory was no longer a problem and large systems
with more processors (as many as 128) could be built. Also introduced was a machine,
known as a data-parallel or SIMD where there were several thousand very simple processors
which work under the direction of a single control unit. Both wide area network (WAN) and
local area network (LAN) technology developed rapidly.
Sixth Generation (1990- )
Most of the developments in computer systems since 1990 have not been fundamental
changes but have been gradual improvements over established systems. This generation
brought about gains in parallel computing in both the hardware and in improved
understanding of how to develop algorithms to exploit parallel architectures.
Workstation technology continued to improve, with processor designs now using a
combination of RISC, pipelining, and parallel processing. Wide area networks, network
bandwidth, speed of operation and networking capabilities have kept developing
tremendously. Personal computers (PCs) now operate with gigahertz processors,
multi-gigabyte disks, hundreds of megabytes of RAM, colour printers, high-resolution
graphics monitors, stereo sound cards and graphical user interfaces. Thousands of software
packages (operating systems and application software) exist today, and Microsoft, said to be
one of the biggest companies ever, has been a major contributor; its chairman, Bill Gates,
was rated the richest man in the world for several years.
Finally, this generation has brought about microcontroller technology. Microcontrollers are
embedded inside other devices (often consumer products) so that they can control the
features or actions of the product. They work as small computers inside devices and now
serve as essential components in most machines.
![]() |
image source:google images |
The Mechanical Era (1623-1945)
Trying to use machines to solve mathematical problems can be traced to the early 17thcentury. Wilhelm Schickhard, Blaise Pascal, and Gottfried Leibnitz were among
mathematicians who designed and implemented calculators that were capable of addition,
subtraction, multiplication, and division included The first multi-purpose or programmable
computing device was probably Charles Babbage's Difference Engine, which was begun in
1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the
Analytical Engine but unfortunately it also was only partially completed. Babbage, together
with Ada Lovelace recognized several important programming techniques, including
conditional branches, iterative loops and index variables. Babbage designed the machine
which is arguably the first to be used in computational science. In 1933, George Scheutz and
his son, Edvard began work on a smaller version of the difference engine and by 1853 they
had constructed a machine that could process 15-digit numbers and calculate fourth-order
differences. The US Census Bureau was one of the first organizations to use the mechanical
computers which used punch-card equipment designed by Herman Hollerith to tabulate data
for the 1890 census. In 1911 Hollerith's company merged with a competitor to found the
corporation which in 1924 became International Business Machines (IBM).
First Generation Electronic Computers (1937-1953)
These devices used electronic switches, in the form of vacuum tubes, instead ofelectromechanical relays. The earliest attempt to build an electronic computer was by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial differential
equations. By 1941 he and graduate student Clifford Berry had succeeded in building a
machine that could solve 29 simultaneous equations with 29 unknowns. However, the
machine was not programmable, and was more of an electronic calculator.
A second early electronic machine was Colossus, built in 1943 by Tommy Flowers for the
British code-breakers at Bletchley Park. The first general purpose programmable electronic
computer was the
Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John
V. Mauchly at the University of Pennsylvania. Research work began in 1943, funded by the
Army Ordnance Department, which needed a way to compute ballistics during World War
II. The machine was completed in 1945 and it was used extensively for calculations during
the design of the hydrogen bomb. Eckert, Mauchly, and John von Neumann, a consultant to
the ENIAC project, began work on a new machine before ENIAC was finished. The main
contribution of EDVAC, their new project, was the notion of a stored program. ENIAC was
controlled by a set of external switches and dials; to change the program required physically
altering the settings on these controls. EDVAC was able to run orders of magnitude faster
than ENIAC and by storing instructions in the same medium as data, designers could
concentrate on improving the internal structure of the machine without worrying about
matching it to the speed of an external control. Eckert and Mauchly later designed what was
arguably the first commercially successful computer, the UNIVAC, first delivered in 1951.
Software technology during this period was very primitive.
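The stored-program idea that EDVAC introduced can be sketched in a few lines of modern code: instructions and data sit in the same memory, so changing the program means nothing more than writing different values into that memory. This is only an illustrative toy; the four-instruction set below is invented for the example, not EDVAC's actual design.

```python
# A toy stored-program machine: code and data share one memory list.
# The instruction set (LOAD/ADD/STORE/HALT) is invented for illustration.

def run(memory):
    """Execute instructions starting at address 0. Each instruction is a
    (opcode, operand) pair stored in memory like any other value."""
    pc = 0   # program counter
    acc = 0  # accumulator register
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6 -- the same memory.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
result = run(mem)
print(result[6])  # prints 5 (2 + 3, stored back into cell 6)
```

Reprogramming this machine means overwriting cells 0-3 with new instruction values, which is exactly the flexibility that physically rewiring ENIAC's switches and dials lacked.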
Second Generation (1954-1962)
The second generation witnessed several important developments at all levels of computer
system design, ranging from the technology used to build the basic circuits to the
programming languages used to write scientific applications. Electronic switches in this era
were based on discrete diode and transistor technology with a switching time of
approximately 0.3 microseconds. The first machines to be built with this technology include
TRADIC at Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Index
registers were designed for controlling loops and floating point units for calculations based
on real numbers.
A number of high-level programming languages were introduced in this era, including
FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of
this era include the IBM 704 and its successors, the 709 and 7094. In the 1950s the first two
supercomputers were designed specifically for numeric processing in scientific applications.
Third Generation (1963-1972)
Technology changes in this generation include the use of integrated circuits, or ICs
(semiconductor devices with several transistors built into one physical component),
semiconductor memories, microprogramming as a technique for efficiently designing
complex processors and the introduction of operating systems and time-sharing. The first ICs
were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit
(or ‘chip’), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to
100 devices per chip. Multilayered printed circuits were developed and core memory was
replaced by faster, solid state memories.
In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use
functional parallelism. By using 10 separate functional units that could operate
simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a
computation rate of one million floating point operations per second (Mflops). Five years
later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its
pipelined functional units, is considered to be the first vector processor and was capable of
executing at ten Mflops. The IBM 360/91, released during the same period, was roughly
twice as fast as the CDC 6600.
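For a sense of scale, "Mflops" simply counts millions of floating-point operations per second, and a crude estimate can be had by timing a loop. The loop size and constants below are arbitrary, and interpreted Python badly understates what the hardware can do, but the arithmetic of the unit is the same:

```python
# Rough Mflops estimate: time N floating-point multiply-adds.
# A crude illustration of the unit, not a proper benchmark.
import time

def estimate_mflops(n=1_000_000):
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x        # two floating-point operations per pass
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed / 1e6  # operations per second, in millions

print(f"~{estimate_mflops():.0f} Mflops (interpreted Python; the hardware is far faster)")
```

Even this slow interpreted loop comfortably exceeds the CDC 6600's one Mflop, which is a useful reminder of how far the hardware described in this section has come.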
Early in this third generation, Cambridge University and the University of London
cooperated in the development of CPL (Combined Programming Language, 1963). CPL was,
according to its authors, an attempt to capture only the important features of the complicated
and sophisticated ALGOL. However, like ALGOL, CPL was large with many features that
were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge
developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967). In
1970 Ken Thompson of Bell Labs developed yet another simplification of CPL called simply
B, in connection with an early implementation of the UNIX operating system.
Fourth Generation (1972-1984)
Large scale integration (LSI - 1,000 devices per chip) and very large scale integration (VLSI
- 100,000 devices per chip) were used in the construction of fourth generation computers.
Whole processors could now fit onto a single chip, and for simple systems the entire
computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays
dropped to about 1ns per gate. Core memories were replaced by semiconductor memories.
Machines with large main memories, like the CRAY 2, began to replace the older high speed
vector processors, such as the CRAY 1, CRAY X-MP and CYBER.
In 1972, Dennis Ritchie developed the C language, drawing on the design of CPL and
Thompson's B. Thompson and Ritchie then used C to write a version of UNIX for the DEC
PDP-11. Other developments in software include very high level languages such as FP
(functional programming) and Prolog (programming in logic).
IBM worked with Microsoft during the 1980s to start what we now call the PC (Personal
Computer) era. The IBM PC was introduced in August 1981 and ran an operating system
called Microsoft Disk Operating System (MS-DOS) 1.0. Development of MS-DOS began in
October 1980, when IBM began searching the market for an operating system for the then-
proposed IBM PC; major contributors were Bill Gates, Paul Allen and Tim Paterson. In 1983
Microsoft Windows was announced, and it has undergone several improvements and
revisions over the last twenty years.
Fifth Generation (1984-1990)
This generation brought about the introduction of machines with hundreds of processors that
could all be working on different parts of a single program. The scale of integration in
semiconductors continued at a great pace and by 1990 it was possible to build chips with a
million components - and semiconductor memories became standard on all computers.
Computer networks and single-user workstations also became popular.
Parallel processing started in this generation. The Sequent Balance 8000 connected up to 20
processors to a single shared memory module, though each processor had its own local cache.
The machine was designed to compete with the DEC VAX-780 as a general purpose Unix
system, with each processor working on a different user's job. However, Sequent provided a
library of subroutines that allowed programmers to write programs that would use more
than one processor, and the machine was widely used to explore parallel algorithms and
programming techniques. The Intel iPSC-1, also known as ‘the hypercube’, connected each
processor to its own memory and used a network interface to connect processors. This
distributed memory architecture meant memory access was no longer a bottleneck, and large
systems with more processors (as many as 128) could be built. Also introduced were data-
parallel or SIMD machines, in which several thousand very simple processors all work
under the direction of a single control unit. Both wide area network (WAN) and
local area network (LAN) technology developed rapidly.
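The data-parallel (SIMD) idea mentioned above can be sketched in software: a single control unit issues one instruction, and every "processor" applies it to its own data element. The function names below are invented for illustration, and of course a real SIMD machine performs the broadcast in hardware rather than in a loop:

```python
# SIMD sketch: one "instruction" applied to every element of a data vector.
# Real data-parallel machines broadcast the instruction in hardware across
# thousands of simple processors; here the broadcast is only simulated.

def simd_apply(instruction, vector):
    """The single control unit issues one instruction; conceptually, every
    processor applies it to its own data element at the same time."""
    return [instruction(x) for x in vector]

data = [1, 2, 3, 4]
doubled = simd_apply(lambda x: 2 * x, data)  # one operation, many elements
print(doubled)  # prints [2, 4, 6, 8]
```

The contrast with the Sequent and the iPSC-1 is that those machines ran many independent instruction streams (MIMD), whereas a SIMD machine runs one instruction stream over many data elements.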
Sixth Generation (1990 - )
Most of the developments in computer systems since 1990 have not been fundamental
changes but have been gradual improvements over established systems. This generation
brought about gains in parallel computing in both the hardware and in improved
understanding of how to develop algorithms to exploit parallel architectures.
Workstation technology continued to improve, with processor designs now using a
combination of RISC, pipelining, and parallel processing. Wide area networks, network
bandwidth and networking capabilities in general have kept developing tremendously.
Personal computers (PCs) now operate with multi-gigahertz processors, multi-gigabyte disks,
hundreds of megabytes of RAM, colour printers, high-resolution graphics monitors, stereo
sound cards and graphical user interfaces. Thousands of software packages (operating
systems and application software) exist today, and Microsoft has been a major contributor.
Microsoft is said to be one of the biggest companies ever, and its chairman, Bill Gates, has
been rated the richest man in the world for several years.
Finally, this generation has brought about micro controller technology. Micro controllers are
‘embedded’ inside some other devices (often consumer products) so that they can control the
features or actions of the product. They work as small computers inside devices and now
serve as essential components in most machines.
Along the way, it is worth noting that computers are divided into three types: analog
computers, digital computers and hybrid computers. Each is explained below.
I. ANALOG COMPUTERS
Analog computers were well known in the 1940s although they are now uncommon. In such
machines, numbers to be used in some calculation were represented by physical quantities -
such as electrical voltages. According to the Penguin Dictionary of Computers (1970), “an
analog computer must be able to accept inputs which vary with respect to time and directly
apply these inputs to various devices within the computer which performs the computing
operations of additions, subtraction, multiplication, division, integration and function
generation….” The computing units of analog computers respond immediately to the
changes which they detect in the input variables. Analog computers excel at solving
differential equations and, for such continuous problems, can be faster than digital computers.
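That strength can be made concrete by comparison: where an analog integrator traces the solution of dy/dt = -y continuously, a digital computer must step through it numerically. A minimal Euler-method sketch, with the step size chosen arbitrarily:

```python
import math

# Euler's method for dy/dt = -y with y(0) = 1, whose exact solution is e^(-t).
# An analog integrator would produce this curve continuously; a digital
# machine approximates it in discrete steps.
def euler(t_end=1.0, h=0.0001):
    y, t = 1.0, 0.0
    while t < t_end:
        y += h * (-y)   # advance one small step along dy/dt = -y
        t += h
    return y

approx = euler()
exact = math.exp(-1.0)   # true value of y(1), about 0.3679
print(abs(approx - exact) < 1e-3)  # prints True: the steps track the curve
```

Shrinking the step size h makes the digital answer approach the continuous one, at the cost of more arithmetic, which is exactly the trade-off the analog machine avoided.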
II. DIGITAL COMPUTERS
Most computers today are digital. They represent information discretely and use a binary
(two-step) system that represents each piece of information as a series of zeroes and ones.
The Pocket Webster School & Office Dictionary (1990) simply defines Digital computers as
“a computer using numbers in calculating.” Digital computers manipulate most data more
easily than analog computers. They are designed to process data in numerical form and their
circuits perform directly the mathematical operations of addition, subtraction, multiplication,
and division. Because digital information is discrete, it can be copied exactly but it is
difficult to make exact copies of analog information.
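The exact-copy property is easy to demonstrate: a number reduced to its zeroes and ones can be rebuilt from them without any loss. A small sketch:

```python
# Digital representation: information as a series of zeroes and ones.
n = 1890                  # e.g. the census year from earlier in the article
bits = format(n, "b")     # the binary (two-step) representation
print(bits)               # prints 11101100010
copy = int(bits, 2)       # rebuilding the number from its bits is exact
print(copy == n)          # prints True: a digital copy loses nothing
```

An analog quantity such as a voltage, by contrast, can only be re-measured to within some tolerance, which is why exact analog copies are so hard to make.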
III. HYBRID COMPUTERS
These are machines that can work as both analog and digital computers, typically accepting
continuously varying analog inputs and processing them digitally.
I hope you enjoyed this reading and found some useful information. If you have any feedback, please let us know in the comments.
In the last post we covered the basic computer components; in this post we will get to understand computers as classified by their capacity.
Micro Computer
The microcomputer has the lowest level capacity. The machine has memories that are
generally made of semiconductors fabricated on silicon chips. Large-scale production of
silicon chips began in 1971 and this has been of great use in the production of
microcomputers. The microcomputer is a digital computer system that is controlled by a
stored program that uses a microprocessor, a programmable read-only memory (ROM) and a
random-access memory (RAM). The ROM holds fixed instructions to be executed by the
computer, while the RAM serves as the machine's working memory.
The Apple IIe, the Radio Shack TRS-80, and the Genie III are examples of microcomputers
and are essentially fourth generation devices. Microcomputers have from 4k to 64k storage
locations and are capable of handling small, single-business applications such as sales
analysis, inventory, billing and payroll.
Mini Computer
In the 1960s, the growing demand for a smaller stand-alone machine brought about the
manufacture of the minicomputer, to handle tasks that large computers could not perform
economically. Minicomputer systems provide faster operating speeds and larger storage
capacities than microcomputer systems. Operating systems developed for minicomputer
systems generally support both multiprogramming and virtual storage. This means that many
programs can be run concurrently. This type of computer system is very flexible and can be
expanded to meet the needs of users.
Minicomputers usually have from 8k to 256k memory storage locations and a relatively
well-established base of application software. The PDP-8, the IBM System/3 and the
Honeywell 200 and 1200 computers are typical examples of minicomputers.
Medium Size Computer
Medium-size computer systems provide faster operating speeds and larger storage capacities
than mini computer systems. They can support a large number of high-speed input/output
devices and several disk drives can be used to provide online access to large data files as
required for direct access processing and their operating systems also support both
multiprogramming and virtual storage. This allows a variety of programs to run
concurrently. A medium-size computer can support a management information system and
can therefore serve the needs of a large bank, insurance company or university. They usually
have memory sizes ranging from 32k to 512k. The IBM System 370, Burroughs 3500
System and NCR Century 200 system are examples of medium-size computers.
Large Computer
Large computers are next to Super Computers and have bigger capacity than the Medium-
size computers. They usually contain full control systems with minimal operator
intervention. Large computer systems range from single-processing configurations to
nationwide computer-based networks involving general-purpose large computers.
Large computers have storage capacities from 512k to 8192k, and these computers have
internal operating speeds measured in nanoseconds, as compared to small computers, where
speed is measured in microseconds.
Expandability to 8 or even 16 million characters is possible with some of these systems. Such characteristics permit many data processing jobs to be accomplished concurrently.
Large computers are usually used in government agencies, large corporations and computer
services organizations. They are used in complex modeling, or simulation, business
operations, product testing, design and engineering work and in the development of space
technology. Large computers can serve as server systems to which many smaller computers
can be connected to form a communication network.
Super Computer
Supercomputers are the biggest and fastest machines today, and they are used when
billions or even trillions of calculations are required. These machines are applied in nuclear
weapon development, accurate weather forecasting, and as host processors for local computer
and time-sharing networks. Supercomputers have capabilities far beyond even the traditional
large-scale systems. Their speed ranges from 100 million instructions per second to well over
three billion. Because of their size, supercomputers sacrifice a certain amount of flexibility.
They are therefore not ideal for providing a variety of user services. For this reason,
supercomputers may need the assistance of a medium-size general-purpose machine (usually
called a front-end processor) to handle minor programs or perform slower speed or smaller
volume operations.
Every day we are surrounded by modern technology, and the computer is one device we use daily. Here are the parts of a computer you need to know.
CPU (Central Processing Unit)
This box is the brain of a computer system. It processes, stores, and communicates information. Wires connect your CPU to your monitor and other devices. Computers are somewhat similar to people: they have memories just like us. The memory on a computer holds data stored on disks. Disks look like small, heavy, old-style records, and they function in a similar way: as the disk spins inside the computer, the data on the disk is accessed. The programs that you use (such as word-processing) and the program that runs your computer (the operating system) are stored on the computer’s hard disk.
Monitor
This is the part of the computer system that visually communicates with the user. It is somewhat like a television. Almost all information communicated from the computer to the user comes through the monitor. (The monitor is also referred to as “the screen”.)
Keyboard
The keyboard is an important tool that allows a user to communicate with the computer. It is composed of “keys” that send a signal to the computer that the computer recognizes and uses to carry out processes and programs. Keyboards come in various shapes and sizes, but serve generally the same purpose. We’ll go over the specific keys in another part of this guide.
Mouse
Similar to the keyboard, the mouse is used to communicate with the computer. The mouse is like a remote control for a TV: it is a tool that drives the computer and can be used “away from the computer”, though the mouse is considered your direct connection into the computer world. We’ll go over how to use the mouse later in the guide.