Saturday, August 25, 2007

Hyper-Threading

Intel Hyper-Threading Technology is a microprocessor technology created by Intel Corporation for several processors based on the Intel NetBurst and Core architectures, such as the Intel Pentium 4, Pentium D, Xeon, and Core 2. The technology was introduced in March 2002 and was initially offered only on the Xeon (Prestonia) processor.

A processor with this technology is seen by an operating system that supports multiple processors, such as Windows NT, Windows 2000, Windows XP Professional, Windows Vista, or GNU/Linux, as two processors, even though physically only one processor is present. With two processors recognized by the operating system, the system can execute threads more efficiently: although these operating systems are multitasking, they dispatch processes to a processor sequentially, using a queueing policy called the dispatching algorithm.
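As a minimal illustration in Python, the operating system's view can be queried directly; the logical count that os.cpu_count() reports is what Hyper-Threading doubles. Note that the psutil package used below for the physical count is a third-party assumption, not part of the standard library.

```python
import os

# Logical processors as seen by the OS scheduler: on a single-core
# Hyper-Threading CPU this reports 2 even though only one physical
# processor is present.
print("Logical processors:", os.cpu_count())

# Physical core count needs a third-party package (assumption:
# psutil has been installed, e.g. via `pip install psutil`).
try:
    import psutil
    print("Physical cores:   ", psutil.cpu_count(logical=False))
except ImportError:
    print("psutil not installed; physical core count unavailable")
```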

Hyper-Threading system requirements

A processor that supports Hyper-Threading technology requires the following components:


Source:
http://id.wikipedia.org/wiki/Hyper-Threading

How a CPU Works

CPU functions

A CPU works much like a calculator, only with far greater processing power. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through hardware devices such as a keyboard, scanner, joystick, or mouse. The CPU is controlled by a set of software instructions, called a computer program. Software is executed by the CPU by reading it from a storage medium, such as a hard disk, floppy disk, CD-ROM, or magnetic tape. The instructions are first stored in physical memory (RAM), where each instruction is given a unique address, called a memory address. The CPU can then access data in RAM by specifying the address of the data it wants.

While a program is being executed, data flows from RAM across a unit called the bus, which connects the CPU to RAM. The data is then decoded by a processing unit called the instruction decoder, which translates the instructions. From the instruction decoder, data travels on to the arithmetic logic unit, which performs calculations and comparisons. Data can be stored temporarily by the arithmetic logic unit in memory locations called registers, where it can be retrieved again quickly. The arithmetic logic unit can perform operations such as addition, multiplication, subtraction, and condition tests on the data in its registers, and it can send the results back to physical memory or to other storage (or to a register, if the result is about to be used again). During this process, a unit in the CPU called the program counter keeps track of each instruction that completes successfully, so that the instructions are executed in the correct order.
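To make the fetch-decode-execute cycle concrete, here is a minimal Python sketch of a toy CPU; the instruction names and layout are invented for illustration and do not correspond to any real instruction set. Instructions sit in memory, a program counter selects the next one, a decoder dispatches on the opcode, and the registers receive the results.

```python
# A toy CPU: memory holds (opcode, operands) tuples, the program
# counter (pc) walks through them, and an ALU updates registers.
memory = [
    ("LOAD", "r0", 7),           # r0 <- 7
    ("LOAD", "r1", 5),           # r1 <- 5
    ("ADD",  "r2", "r0", "r1"),  # r2 <- r0 + r1
    ("HALT",),
]
registers = {"r0": 0, "r1": 0, "r2": 0}

pc = 0
while True:
    instr = memory[pc]          # fetch the next instruction
    op = instr[0]               # decode: dispatch on the opcode
    if op == "LOAD":            # execute
        registers[instr[1]] = instr[2]
    elif op == "ADD":
        registers[instr[1]] = registers[instr[2]] + registers[instr[3]]
    elif op == "HALT":
        break
    pc += 1                     # program counter advances in order

print(registers)  # {'r0': 7, 'r1': 5, 'r2': 12}
```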

Instruction branching

The program counter in a CPU normally advances sequentially. However, certain instructions, called jump instructions, allow the CPU to move to an instruction that lies outside the normal sequence. This is known as instruction branching (branch instructions). Branches can be conditional (subject to a test) or unconditional. An unconditional branch always jumps to a new instruction outside the instruction flow, while a conditional branch first tests the result of a previous operation to decide whether the branch will be taken or not. The data tested for a branch is stored in a location in the CPU called a flag.
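Extending the toy CPU above with a flag and a conditional jump shows branching at work; again, the JNZ (jump if not zero) mnemonic and its encoding are invented for illustration.

```python
# Toy CPU extended with a zero flag and a conditional jump (JNZ).
memory = [
    ("LOAD", "r0", 3),      # 0: r0 <- 3
    ("SUB",  "r0", 1),      # 1: r0 <- r0 - 1, sets zero_flag
    ("JNZ",  1),            # 2: if zero_flag is clear, jump back to 1
    ("HALT",),              # 3:
]
registers = {"r0": 0}
zero_flag = False

pc = 0
while memory[pc][0] != "HALT":
    instr = memory[pc]
    if instr[0] == "LOAD":
        registers[instr[1]] = instr[2]
        pc += 1
    elif instr[0] == "SUB":
        registers[instr[1]] -= instr[2]
        zero_flag = (registers[instr[1]] == 0)  # flag records the result
        pc += 1
    elif instr[0] == "JNZ":
        # Conditional branch: taken only if the previous result was nonzero.
        pc = instr[1] if not zero_flag else pc + 1

print(registers["r0"])  # 0, after three trips around the loop
```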

Numbers the CPU can handle

Most CPUs can handle two kinds of numbers: fixed-point and floating-point. A fixed-point number has a specific number of digits on either side of its decimal point. This limits the range of values the number can take, but fixed-point values can be processed by the CPU very quickly. A floating-point number, by contrast, is expressed in scientific notation, where a value is represented as a decimal number multiplied by a power of 10 (such as 3.14 x 10^57). Scientific notation is a compact way of expressing very large or very small numbers, and it allows an enormous range of values before and after the decimal point. Floating-point numbers are commonly used for graphics and scientific work, but floating-point arithmetic is far more complicated and takes the CPU longer to complete, since it may consume several clock cycles. Some computers use a separate processor to compute floating-point numbers, called a math co-processor, which works in parallel with the CPU to speed up floating-point calculation. Math co-processors are now standard in most computers, because today's applications operate heavily on floating-point numbers.
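A short Python contrast of the two representations (the monetary example is an arbitrary illustration): a fixed-point value can be carried as a scaled integer, fast but narrow in range, while a floating-point value carries its own exponent and spans enormous magnitudes.

```python
# Fixed point: store money as integer hundredths (cents). The decimal
# point sits at a fixed position, so the range is limited, but the
# arithmetic is plain integer arithmetic, which CPUs do quickly.
price_cents = 1999            # represents 19.99
total_cents = price_cents * 3
print(total_cents / 100)      # 59.97

# Floating point: a mantissa times a power of ten, so very large and
# very small magnitudes fit into one representation.
avogadro = 6.022e23           # 6.022 x 10^23
planck   = 6.626e-34          # 6.626 x 10^-34
print(avogadro * planck)      # ~3.99e-10, computed in floating point
```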

Source:
http://id.wikipedia.org/wiki/CPU

CPU

A CPU (short for central processing unit) refers to the part of computer hardware that interprets and carries out the instructions and data contained in software. The more general term processor is sometimes used to refer to a CPU. A microprocessor is a CPU manufactured on an integrated circuit, often in a single-chip package. Since the mid-1970s, these single-chip microprocessors have become the common and essential implementation of CPUs.

CPU components

The components of a CPU fall into several kinds, as follows:

  • A control unit, which directs the flow of the program. This component is present in every CPU.
  • An execution unit, which performs operations on data and has several parts, such as the arithmetic logic unit or ALU (Arithmetic and Logical Unit), the floating-point unit (FPU), and others. This component is present in every kind of CPU.
  • A set of registers, used to hold operands and intermediate results of a computation. This component is found in some CPUs, but others lack it.
  • Internal CPU memory, which can take the form of a cache. This component is found in some CPUs, but many, particularly older CPUs, lack it.

Block diagram of a simple CPU


Source:
http://id.wikipedia.org/wiki/CPU

Computer Architecture

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. Computer architecture is the blueprint and functional description of the requirements of the hardware being designed (processing speed and system interconnections). Here, the implementation plan for each part focuses chiefly on how the CPU will work and on how data and addresses are accessed to and from cache memory, RAM, ROM, hard disks, and so on. Some examples of computer architectures are the von Neumann architecture, CISC, RISC, Blue Gene, etc.

Computer architecture can also be defined and categorized as the science, and at the same time the art, of interconnecting hardware components to create a computer that meets functional, performance, and cost targets.

Computer architecture comprises at least three subcategories:

Source:
http://id.wikipedia.org/wiki/Arsitektur_komputer

Computer architecture (3)

Historical perspective

Early usage in computer context

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had occasion to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory; in attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements aimed at the level of “system architecture” – a term that seemed more useful than “machine organization.” Subsequently Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.” Brooks went on to play a major role in the development of the IBM System/360 line of computers, where “architecture” gained currency as a noun with the definition “what the user needs to know.” Later the computer world would employ the term in many less-explicit ways.

The first mention of the term architecture in the refereed computer literature is in a 1964 article describing the IBM System/360. [3] The article defines architecture as the set of “attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.” In the definition, the programmer perspective of the computer’s functional behavior is key. The conceptual structure part of an architecture description makes the functional behavior comprehensible, and extrapolatable to a range of use cases. Only later on did ‘internals’ such as “the way by which the CPU performs internally and accesses addresses in memory,” mentioned above, slip into the definition of computer architecture.


Source:
http://en.wikipedia.org/wiki/Computer_architecture

Computer architecture (2)

Design goals

The exact form of a computer system depends on the constraints and goals for which it was optimized. Computer architectures usually trade off standards, cost, memory capacity, latency and throughput. Sometimes other considerations, such as features, size, weight, reliability, expandability and power consumption are factors as well.

The most common scheme carefully chooses the bottleneck that most reduces the computer's speed. Ideally, the cost is allocated proportionally to assure that the data rate is nearly the same for all parts of the computer, with the most costly part being the slowest. This is how skillful commercial integrators optimize personal computers.

Cost

Generally cost is held constant, determined by either system or commercial requirements.

Performance

Computer performance is often described in terms of clock speed (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. As a result, manufacturers have moved away from clock speed as a sole measure of performance. The amount of cache a processor contains also matters: if clock speed is a car, the cache is the string of traffic lights along its route; no matter how fast the car can go, it makes poor progress if it keeps having to stop for data that is not at hand. A processor is fastest when it combines high clock speed with ample cache.

Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.

There are two main types of speed, latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices — for example, adding cache usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking almost immediately after they have been instructed to brake.
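To make the two measures concrete, here is a back-of-the-envelope Python sketch with made-up numbers (a hypothetical 4-stage pipelined unit at 2 GHz, not any specific CPU): latency is how long one operation takes end to end, while throughput is how many operations complete per second once the pipeline is full.

```python
# Illustrative numbers only: a 4-stage pipelined unit at 2 GHz.
clock_hz = 2_000_000_000
pipeline_stages = 4
n_ops = 1_000_000

# Latency: time from the start of one operation to its completion.
latency_s = pipeline_stages / clock_hz            # 2 ns per operation

# Throughput: once full, the pipeline retires one operation per cycle,
# so n operations take (stages - 1 + n) cycles in total.
total_cycles = pipeline_stages - 1 + n_ops
throughput_ops_per_s = n_ops / (total_cycles / clock_hz)

print(f"latency:    {latency_s * 1e9:.1f} ns")            # 2.0 ns
print(f"throughput: {throughput_ops_per_s / 1e9:.2f} Gops/s")  # ~2.00
```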

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a webserving application) or memory bound (as in video editing). Power consumption has become important in servers and portable devices like laptops.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, whether in hardware or software, which permit a specific benchmark to execute quickly but which do not offer similar advantages to other, more general tasks.

Power consumption

Power consumption is another design criterion that factors into the design of modern computers. Power efficiency can often be traded for performance or cost benefits. With the increasing power density of modern circuits as the number of transistors per chip scales (Moore's law), power efficiency has increased in importance. Recent processor designs such as the Intel Core 2 put more emphasis on increasing power efficiency. Also, in the world of embedded computing, power efficiency has long been, and remains, a primary design goal next to performance.


Source:
http://en.wikipedia.org/wiki/Computer_architecture

Computer architecture (1)

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer — focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture comprises at least three main subcategories:[1]

  • Instruction set architecture (ISA): the abstract image of the computing system as seen by a machine-language (or assembly-language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats.
  • Microarchitecture, also known as computer organization: a lower-level, more concrete description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA[2]. The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
  • System design, which includes all of the other hardware components within a computing system, such as:
  1. system interconnects such as computer buses and switches
  2. memory controllers and hierarchies
  3. CPU off-load mechanisms such as direct memory access
  4. issues like multi-processing.

Once both the ISA and microarchitecture have been specified, the actual device needs to be designed in hardware. This design process is often called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Implementation can be further broken down into three pieces:

  • Logic Implementation/Design - where the blocks that were defined in the microarchitecture are implemented as logic equations.
  • Circuit Implementation/Design - where speed critical blocks or logic equations or logic gates are implemented at the transistor level.
  • Physical Implementation/Design - where the circuits are drawn out, the different circuit components are placed in a chip floor-plan or on a board and the wires connecting them are routed.

For CPUs, the entire implementation process is often called CPU design.

More specific usages of the term include more general wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.

A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel, operating system and applications (see also Tanenbaum 79).


Source:
http://en.wikipedia.org/wiki/Computer_architecture

von Neumann architecture (2)

First designs

The term "von Neumann architecture" arose from mathematician John von Neumann's paper, First Draft of a Report on the EDVAC.[2] Dated June 30, 1945, it was an early written account of a general purpose stored-program computing machine (the EDVAC). However, while von Neumann's work was pioneering, the term von Neumann architecture does somewhat of an injustice to von Neumann's collaborators, contemporaries, and predecessors.

A patent application of Konrad Zuse mentioned this concept in 1936.

The idea of a stored-program computer existed at the Moore School of Electrical Engineering at the University of Pennsylvania before von Neumann even knew of the ENIAC's existence. The exact person who originated the idea there is unknown.

Herman Lukoff credits Eckert (see References).

John William Mauchly and J. Presper Eckert wrote about the stored-program concept in December 1943 during their work on ENIAC. Additionally, ENIAC project administrator Grist Brainerd's December 1943 progress report for the first period of the ENIAC's development implicitly proposed the stored-program concept (while simultaneously rejecting its implementation in the ENIAC) by stating that "in order to have the simplest project and not to complicate matters" the ENIAC would be constructed without any "automatic regulation."

When the ENIAC was being designed, it was clear that reading instructions from punched cards or paper tape would not be fast enough, since the ENIAC was designed to execute instructions at a much higher rate. The ENIAC's program was thus wired into the design, and it had to be rewired for each new problem. It was clear that a better system was needed. The initial report on the proposed EDVAC was written during the time the ENIAC was being built, and contained the idea of the stored program, where instructions were stored in high-speed memory, so they could be quickly accessed for execution.

Alan Turing presented a paper on February 19, 1946, which included a complete design for a stored-program computer, the Automatic Computing Engine (ACE).

von Neumann bottleneck

The separation between the CPU and memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the CPU and memory compared to the amount of memory. In modern machines, throughput is much smaller than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continuously forced to wait for vital data to be transferred to or from memory. As CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem.
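A rough Python estimate of the effect (every figure below is an assumption chosen for illustration, not a measurement): when the CPU performs only minimal processing per byte, the CPU-memory throughput, not the ALU, determines the effective speed.

```python
# Assumed figures, for illustration only.
cpu_ops_per_s = 10e9      # what the ALU could sustain
bus_bytes_per_s = 5e9     # CPU <-> memory throughput

data_bytes = 1e9          # 1 GB of data to process
ops_per_byte = 1          # minimal processing per byte

compute_time = data_bytes * ops_per_byte / cpu_ops_per_s   # 0.1 s
transfer_time = data_bytes / bus_bytes_per_s               # 0.2 s

# The slower of the two dominates: the CPU waits on memory.
print("effective time:", max(compute_time, transfer_time), "s")   # 0.2 s
print("CPU busy only", compute_time / transfer_time * 100, "% of the time")
```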

The term "von Neumann bottleneck" was coined by John Backus in his 1977 ACM Turing award lecture. According to Backus:

"Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it."

The performance problem is reduced by a cache between CPU and main memory, and by the development of branch prediction algorithms. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards pushing vast numbers of words back and forth than earlier languages like Fortran, but internally, that is still what computers spend much of their time doing.


Source:
http://en.wikipedia.org/wiki/Von_Neumann_architecture


von Neumann architecture (1)


The von Neumann architecture is a computer design model that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after mathematician and early computer scientist John von Neumann. Such a computer implements a universal Turing machine, and the common "referential model" of specifying sequential architectures, in contrast with parallel architectures. The term "stored-program computer" is generally used to mean a computer of this design, although as modern computers are usually of this type, the term has fallen into disuse.



History

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or to run video games. To change the program of such a machine, you have to re-wire, re-structure, or even re-design the machine. Indeed, the earliest computers were not so much "programmed" as they were "designed". "Reprogramming", when it was possible at all, was a very manual process, starting with flow charts and paper notes, followed by detailed engineering designs, and then the often-arduous process of implementing the physical changes.

The idea of the stored-program computer changed all that. By creating an instruction set architecture and detailing the computation as a series of instructions (the program), the machine becomes much more flexible. By treating those instructions in the same way as data, a stored-program machine can easily change the program, and can do so under program control.

The terms "von Neumann architecture" and "stored-program computer" are generally used interchangeably, and that usage is followed in this article. However, the Harvard architecture concept should be mentioned as a design which stores the program in an easily modifiable form, but not using the same storage as for general data.

A stored-program design also lets programs modify themselves while running. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became customary features of machine architecture. Self-modifying code is deprecated today since it is hard to understand and debug, and modern processor pipelining and caching schemes make it inefficient.

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers and other automated programming tools possible. One can "write programs which write programs".[1] On a smaller scale, I/O-intensive machine instructions such as the BITBLT primitive used to modify images on a bitmap display, were once thought to be impossible to implement without custom hardware. It was shown later that these instructions could be implemented efficiently by "on the fly compilation" technology, e.g. code-generating programs.
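As a minimal sketch of treating instructions as data, the Python fragment below builds the source text of a function at run time and then executes it; the function name and generated code are invented for illustration, but the pattern is the small-scale essence of what assemblers and compilers automate.

```python
# "Write a program which writes a program": build source code as data,
# then hand it back to the machine for execution.
op = "+"  # could come from user input or a larger code generator
source = f"def combine(a, b):\n    return a {op} b\n"

namespace = {}
exec(source, namespace)            # compile and run the generated code
print(namespace["combine"](2, 3))  # 5
```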

There are drawbacks to the von Neumann design. Aside from the von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a crash. A buffer overflow is one very common example of such a malfunction. The ability for programs to create and modify other programs is also frequently exploited by malware. Malware might use a buffer overflow to smash the call stack and overwrite the existing program, and then proceed to modify other program files on the system to propagate the compromise. Memory protection and other forms of access control can help protect against both accidental and malicious program modification.

Source:
http://en.wikipedia.org/wiki/Von_Neumann_architecture

John von Neumann

John von Neumann (Hungarian: Margittai Neumann János Lajos; born December 28, 1903 in Budapest, Austria-Hungary; died February 8, 1957 in Washington, D.C., United States) was a Hungarian-born American mathematician who made contributions to quantum physics, functional analysis, set theory, topology, economics, computer science, numerical analysis, hydrodynamics (of explosions), statistics and many other mathematical fields, standing as one of history's outstanding mathematicians.[1] Most notably, von Neumann was a pioneer of the application of operator theory to quantum mechanics (see von Neumann algebra), a member of the Manhattan Project and the Institute for Advanced Study at Princeton (as one of the few originally appointed, a group collectively referred to as the "demi-gods"), and the co-creator of game theory and the concepts of cellular automata and the universal constructor. Along with Edward Teller and Stanislaw Ulam, von Neumann worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb.

Biography

The eldest of three brothers, von Neumann was born Neumann János Lajos (Hungarian names have the family name first) in Budapest, Hungary, to a Jewish family. His father was Neumann Miksa (Max Neumann), a lawyer who worked in a bank. His mother was Kann Margit (Margaret Kann). János, nicknamed "Jancsi" (Johnny), was an extraordinary prodigy. At the age of six, he could divide two 8-digit numbers in his head.

He entered the German-speaking Lutheran Gymnasium in Budapest in 1911. In 1913 his father was rewarded with ennoblement for his service to the Austro-Hungarian Empire, the Neumann family acquiring the Hungarian mark of Margittai, or the Austrian equivalent von. Neumann János therefore became János von Neumann, a name that he later changed to the German Johann von Neumann. After teaching as history's youngest Privatdozent at the University of Berlin from 1926 to 1930, he, his mother, and his brothers emigrated to the United States in the early 1930s, after Hitler's rise to power in Germany. He anglicized Johann to John and kept the Austrian-aristocratic surname von Neumann, whereas his brothers adopted the surnames Vonneumann and Neumann (using the de Neumann form briefly when first in the US).

Although von Neumann unfailingly dressed formally, he enjoyed throwing extravagant parties and driving hazardously (frequently while reading a book, and sometimes crashing into a tree or getting arrested). He once reported one of his many car accidents in this way: "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path."[2] He was a profoundly committed hedonist who liked to eat and drink heavily (it was said that he knew how to count everything except calories), tell dirty stories and very insensitive jokes (for example: "bodily violence is a displeasure done with the intention of giving pleasure"), and persistently gaze at the legs of young women (so much so that female secretaries at Los Alamos often covered up the exposed undersides of their desks with cardboard.)

He received his Ph.D. in mathematics (with minors in experimental physics and chemistry) from the University of Budapest at the age of 23. He simultaneously earned his diploma in chemical engineering from the ETH Zurich in Switzerland at the behest of his father, who wanted his son to invest his time in a more financially viable endeavour than mathematics. Between 1926 and 1930 he was a private lecturer in Berlin, Germany.

By age 25 he had published 10 major papers, and by 30, nearly 36.[citation needed]

Von Neumann was invited to Princeton, New Jersey in 1930, and was one of four people selected for the first faculty of the Institute for Advanced Study (two of the others were Albert Einstein and Kurt Gödel), where he was a mathematics professor from its formation in 1933 until his death.

From 1936 to 1938 Alan Turing was a visitor at the Institute, where he completed a Ph.D. dissertation under the supervision of Alonzo Church at Princeton. This visit occurred shortly after Turing's publication of his 1936 paper "On Computable Numbers with an Application to the Entscheidungsproblem" which involved the concepts of logical design and the universal machine. Von Neumann must have known of Turing's ideas but it is not clear whether he applied them to the design of the IAS machine ten years later.

In 1937 he became a naturalized citizen of the US. In 1938 von Neumann was awarded the Bôcher Memorial Prize for his work in analysis.

Von Neumann married twice. He married Mariette Kövesi in 1930. When he proposed to her, he was incapable of expressing anything beyond "You and I might be able to have some fun together, seeing as how we both like to drink."[citation needed] Von Neumann agreed to convert to Catholicism to placate her family and remained a Catholic until his death. The couple divorced in 1937. He then married Klara Dan in 1938. Von Neumann had one child, by his first marriage, a daughter named Marina. She is a distinguished professor of international trade and public policy at the University of Michigan.

Von Neumann was diagnosed with bone cancer or pancreatic cancer in 1955, possibly caused by exposure to radioactivity while observing A-bomb tests in the Pacific or in later work on nuclear weapons at Los Alamos, New Mexico. (Fellow nuclear pioneer Enrico Fermi had died of stomach cancer in 1954.) Von Neumann died within two years of the initial diagnosis, in excruciating pain. The cancer had spread to his brain, inhibiting his mental ability. While at Walter Reed Hospital in Washington, D.C., he invited a Roman Catholic priest, Father Anselm Strittmatter, who administered the last sacraments to him.[3] He died under military security lest he reveal military secrets while heavily medicated. Von Neumann was buried at Princeton Cemetery in Princeton, Mercer County, New Jersey.

He wrote 150 published papers in his life; 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. He was developing a theory of the structure of the human brain before he died.

Von Neumann entertained notions which would now trouble many. His love for meteorological prediction led him to dream of manipulating the environment by spreading colorants on the polar ice caps in order to enhance absorption of solar radiation (by reducing the albedo) and thereby raise global temperatures. He also favored a preemptive nuclear attack on the USSR, believing that doing so could prevent it from obtaining the atomic bomb.[4]

Source:
http://en.wikipedia.org/wiki/Von_Neumann

AMD Turion 64

Turion 64 is the brand name AMD applies to its 64-bit low-power (mobile) processors codenamed K8L [1]. The Turion 64 and Turion 64 X2 processors compete with Intel's mobile processors, initially the Pentium M and currently the Intel Core and Intel Core 2 processors.

Turion 64 processors are compatible with AMD's Socket 754 and are equipped with 512 or 1024 KiB of L2 cache, a 64-bit single-channel on-die memory controller, and an 800 MHz HyperTransport bus. Battery-saving features, like PowerNow!, are central to the marketing and usefulness of these CPUs.

Features

Turion 64 "Lancaster" (90 nm)

All models support:

Turion 64 "Richmond" (90 nm)

The models support the same features available in Lancaster, plus AMD Virtualization.

Model naming methodology

The model naming scheme does not make it obvious how to compare one Turion with another, or even with an Athlon 64. The model name is two letters, a dash, and a two-digit number (for example, ML-34). The two letters together designate a processor class, while the number represents a PR rating. The first letter is M for single-core processors and T for dual-core Turion 64 X2 processors. The later in the alphabet that the second letter appears, the more the model has been designed for mobility (frugal power consumption). Take, for instance, an MT-30 and an ML-34. Since the T in MT-30 comes later in the alphabet than the L in ML-34, the MT-30 consumes less power than the ML-34. But since 34 is greater than 30, the ML-34 is faster than the MT-30.
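The comparison rules above are mechanical enough to write down. The following Python sketch is illustrative (the parsing helper is my own invention, not an AMD tool): a later second letter means lower power draw, and a larger number means a higher PR rating.

```python
def parse_turion(model):
    """Split a Turion model name like 'ML-34' into its two signals."""
    letters, number = model.split("-")
    # Second letter: the later in the alphabet, the lower the power use.
    mobility = ord(letters[1])
    # Two-digit number: the PR rating, i.e. relative performance.
    rating = int(number)
    return mobility, rating

mob_a, perf_a = parse_turion("MT-30")
mob_b, perf_b = parse_turion("ML-34")
print("MT-30 uses less power than ML-34:", mob_a > mob_b)  # True
print("MT-30 is faster than ML-34:     ", perf_a > perf_b)  # False
```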

Source:
http://en.wikipedia.org/wiki/Turion_64

AMD Turion 64 X2

Turion 64 X2 is AMD's 64-bit dual-core mobile CPU, intended to compete with Intel's Core and Core 2 CPUs. The Turion 64 X2 was launched on May 17, 2006, after several delays. These processors use Socket S1, and feature DDR2 memory. They also include AMD Virtualization Technology and more power-saving features.

AMD first produced the Turion 64 X2 on IBM's 90 nm Silicon on insulator (SOI) process (cores with the Taylor codename). As of May 2007, they have switched to a 65 nm Silicon-Germanium stressed process[citation needed], which was recently achieved through the combined effort of IBM and AMD, with 40% improvement over comparable 65 nm processes[citation needed]. The earlier 90 nm devices were codenamed Taylor and Trinidad, while the newer 65 nm cores have codename Tyler.

Cores

Taylor & Trinidad (90 nm SOI)

Turion64-X2 for Socket S1
  • Dual AMD64 core
  • L1 cache: 64 + 64 KiB (data + instructions) per core
  • L2 cache: 256 KiB (Taylor) or 512 KiB (Trinidad) per core, fullspeed
  • Memory controller: dual-channel DDR2-667 (the full-duplex HyperTransport link gives a CPU-RAM bandwidth of 10.7 GB/s, twice that of Intel's Core 2 Duo Merom, which manages only 5.3 GB/s over its half-duplex 667 MHz FSB memory controller; see the arithmetic after this list)[1]
  • MMX, Extended 3DNow!, SSE, SSE2, SSE3, AMD64, PowerNow!, NX bit, AMD-V
  • Socket S1, HyperTransport (800 MHz, 1600 MT/s, 10.7 GB/s CPU-RAM + 6.4 GB/s CPU-I/O transfer rate)[2]
  • Power consumption (TDP): 31, 33, 35 Watt max
  • First release: May 17, 2006
  • Clock rate: 1600, 1800, 2000 MHz
    • 31W TDP:
      • TL-50: 1600 MHz (256 KiB L2-Cache per core)
      • TL-52: 1600 MHz (512 KiB L2-Cache per core)
    • 33W TDP:
      • TL-56: 1800 MHz (512 KiB L2-Cache per core)
    • 35W TDP:
      • TL-60: 2000 MHz (512 KiB L2-Cache per core)
      • TL-64: 2200 MHz (512 KiB L2-Cache per core)
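As referenced in the list above, the quoted bandwidth figures follow from straightforward arithmetic: transfers per second times bytes per 64-bit transfer times the number of channels. A quick Python check:

```python
# DDR2-667, dual channel: transfers/s x bytes per transfer x channels.
transfers_per_s = 667e6
bytes_per_transfer = 8          # each channel is 64 bits wide
channels = 2
print(transfers_per_s * bytes_per_transfer * channels / 1e9)  # ~10.7 GB/s

# Core 2 Duo (Merom) 667 MHz front-side bus for comparison: one 64-bit bus.
print(667e6 * 8 / 1e9)          # ~5.3 GB/s
```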

Tyler (65 nm SOI)

  • Dual AMD64 core
  • L1 cache: 64 + 64 KiB (data + instructions) per core
  • L2 cache: 512 KiB per core, fullspeed
  • Memory controller: dual channel DDR2-800 MHz (12.8 GBytes/s full-duplex CPU/RAM bandwidth)
  • 100 MHz granularity (Dynamic P-state Transitions)
  • MMX, Extended 3DNow!, SSE, SSE2, SSE3, AMD64, PowerNow!, NX Bit, AMD-V
  • Socket S1, HyperTransport (1600 MHz)
  • Power consumption (TDP): 31, 33, 35 Watt max
  • First release: 2007
  • Clock rate: 1800, 1900, 2000, 2200, 2300 MHz
    • 31W TDP:
      • TL-56 1800 MHz (512 KiB L2-Cache per core)
      • TL-58 1900 MHz (512 KiB L2-Cache per core)
      • TL-60 2000 MHz (512 KiB L2-Cache per core)
    • 35W TDP:
      • TL-64 2200 MHz (512 KiB L2-Cache per core)
      • TL-66 2300 MHz (512 KiB L2-Cache per core)

Source:
http://en.wikipedia.org/wiki/Turion_64_X2

AMD Opteron

The AMD Opteron (codenamed SledgeHammer during development) was the first of AMD's eighth-generation x86 processors based on the K8 or Hammer core, and the first processor to implement the AMD64 (formerly x86-64) instruction set architecture. It was released on April 22, 2003 and was intended to compete in the server market, particularly in the same segment as the Intel Xeon processor.

Technical description


The two key capabilities

Opteron combines two important capabilities in a single processor die:

  1. native execution of legacy x86 32-bit applications without speed penalties
  2. native execution of x86-64 64-bit applications (linear-addressing beyond 4 GB RAM)

The first capability is notable because at the time of Opteron's introduction, the only other 64-bit processor architecture marketed with 32-bit x86 compatibility (Intel's Itanium) ran x86 legacy-applications only with significant speed degradation. The second capability, by itself, is less noteworthy, as all major RISC players (Sun SPARC, DEC Alpha, HP PA-RISC, IBM POWER, SGI MIPS, etc.) have had 64-bit implementations for many years. In combining these two capabilities, however, the Opteron has earned recognition for its ability to run the vast installed base of x86 applications economically, while simultaneously offering an upgrade-path to 64-bit computing.

The Opteron processor possesses an integrated DDR SDRAM / DDR2 SDRAM (Socket F) memory controller. This both reduces the latency penalty for accessing the main RAM and eliminates the need for a separate northbridge chip.

Multi-processor features

In multi-processor systems (more than one Opteron on a single motherboard), the CPUs communicate using the Direct Connect Architecture over high-speed HyperTransport links. Each CPU can access the main memory of another processor, transparent to the programmer. The Opteron approach to multi-processing is not the same as standard symmetric multiprocessing as instead of having one bank of memory for all CPUs, each CPU has its own memory. The Opteron CPU directly supports up to an 8-way configuration, which can be found in mid-level servers. Enterprise-level servers use additional (and expensive) routing chips to support more than 8 CPUs per box.

In a variety of computing benchmarks, the Opteron architecture has demonstrated better multi-processor scaling than the Intel Xeon[citation needed]. This is primarily because adding an additional Opteron processor increases bandwidth, while that is not always the case for Xeon systems, and the fact that the Opterons use a switched fabric, rather than a shared bus. In particular, the Opteron's integrated memory controller, when using Non-Uniform Memory Access (NUMA), allows the CPU to access local RAM quickly. In contrast, multiprocessor Xeon system CPUs share only two common buses for both processor-processor and processor-memory communication. As the number of CPUs increases in a Xeon system, contention for the shared bus causes computing efficiency to drop.

Multi-core Opterons

In May of 2005, AMD introduced its first "Multi-Core" Opteron CPUs. At the present time, the term "Multi-Core" at AMD in practice means "dual-core"; each physical Opteron chip actually contains two separate processor cores. This effectively doubles the compute-power available to each motherboard processor socket. One socket can now deliver the performance of two processors, two sockets can deliver the performance of four processors, and so on. Since motherboard costs go up dramatically as the number of CPU sockets increases, multicore CPUs now allow much higher performing systems to be built with more affordable motherboards.

AMD's model number scheme has changed somewhat in light of its new multicore lineup. At the time of its introduction, AMD's fastest multicore Opteron was the model 875, with two cores running at 2.2 GHz each. AMD's fastest single-core Opteron at this time was the model 252, with one core running at 2.6 GHz. For multithreaded applications, the model 875 would be much faster than the model 252, but for single threaded applications the model 252 would perform faster.
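That trade-off can be made concrete with a deliberately idealized Python estimate (assuming performance scales linearly with clock rate, and perfectly across cores for threaded work; real workloads scale worse):

```python
# Idealized comparison of the two Opteron models named above.
model_875 = {"cores": 2, "ghz": 2.2}
model_252 = {"cores": 1, "ghz": 2.6}

# Multithreaded work: all cores contribute (perfect scaling assumed).
mt_875 = model_875["cores"] * model_875["ghz"]  # 4.4
mt_252 = model_252["cores"] * model_252["ghz"]  # 2.6

# Single-threaded work: only clock rate matters.
st_875 = model_875["ghz"]                       # 2.2
st_252 = model_252["ghz"]                       # 2.6

print("multithreaded winner:", "875" if mt_875 > mt_252 else "252")  # 875
print("single-thread winner:", "875" if st_875 > st_252 else "252")  # 252
```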

Next-Generation AMD Opteron processors are offered in three series: the 1200 Series (up to 1P/2-core), the 2200 Series (up to 2P/4-core), and the 8200 Series (4P/8-core to 8P/16-core). The 1200 Series is built on AMD's new Socket AM2. The 2200 Series and 8200 Series are built on AMD's new Socket F (1207).

AMD is expected to launch quad core[1] Opteron chips in August 2007 [2] with hardware vendors to follow suit with servers in the following month. Based on a core design codenamed Barcelona, new power and thermal management techniques are planned for the chips. Existing dual core DDR2 based platforms will be upgradeable to quad core chips[3].

Source:
http://en.wikipedia.org/wiki/Opteron

Intel Pentium Dual-Core

The Pentium Dual-Core brand (limited to 2007[1]) refers to lower-end x86-architecture microprocessors from Intel. They were based on either the 32-bit Yonah or 64-bit Allendale processors (with very different microarchitectures) targeted at mobile or desktop computers respectively.

The first processors using the brand appeared in notebook computers in early 2007. Those processors, named Pentium T2060, T2080, and T2130[2], had the 32-bit Pentium M-derived Yonah core, and closely resembled the Core Duo T2050 processor with the exception of having 1 MB L2 cache instead of 2 MB[3]. All three of them had a 533 MHz FSB connecting CPU with memory. Apparently, "Intel developed the Pentium Dual-Core at the request of laptop manufacturers"[4].

Subsequently, on June 3, 2007, Intel released the desktop Pentium Dual-Core branded processors[5] known as the Pentium E2140 and E2160[6]. Those processors support the Intel 64 extensions, being based on the newer, 64-bit Allendale core with the Core microarchitecture. These closely resembled the Core 2 Duo E4300 processor, with the exception of having 1 MB of L2 cache instead of 2 MB[7]. Both of them had an 800 MHz FSB. They targeted the budget market above the Intel Celeron (Conroe-L single-core series) processors, which feature only 512 KB of L2 cache. This step marked a change in the Pentium brand, relegating it to the budget segment rather than its former position as the mainstream/premium brand.

In 2006, Intel announced a plan[8] to bring the Pentium brand back from retirement as a moniker for low-cost Core-architecture processors based on the single-core Conroe-L, but with 1 MB of cache. The numbers for those planned Pentiums were similar to the numbers of the later Pentium Dual-Core CPUs, but with a first digit of "1" instead of "2", suggesting their single-core functionality. Apparently, a single-core Conroe-L with 1 MB of cache was not strong enough to distinguish the planned Pentiums from the planned Celerons, so it was replaced by dual-core CPUs, bringing the "Dual-Core" add-on to the "Pentium" moniker.

Processor List

  • Pentium Dual-Core E2140: 1600 MHz, 1024 KiB L2 cache, 800 MT/s FSB, 8x multiplier, 1.162-1.312 V, 65 W TDP, LGA 775, part number HH80557PG0251M, release price $74 (USD). sSpec SLA3J (L2 stepping), released June 3, 2007; sSpec SLA93 (M0 stepping), released July 22, 2007.
  • Pentium Dual-Core E2160: 1800 MHz, 1024 KiB L2 cache, 800 MT/s FSB, 9x multiplier, 1.162-1.312 V, 65 W TDP, LGA 775, part number HH80557PG0331M, release price $84 (USD). sSpec SLA3H (L2 stepping), released June 3, 2007; sSpec SLA8Z (M0 stepping), released July 22, 2007.


Source:
http://en.wikipedia.org/wiki/Intel_Pentium_Dual-Core
http://en.wikipedia.org/wiki/List_of_Intel_Pentium_Dual-Core_microprocessors

Pentium M

The Pentium M[1] brand refers to only two single-core 32-bit CPUs (with the Intel P6 microarchitecture of x86 microprocessors) introduced in March 2003 (during the reign of the Pentium 4 desktop CPUs) and forming a part of the Intel Centrino platform. The processors evolved from the last Pentium III branded CPU and were intended for use in laptop personal computers only, the "M" moniker standing for mobile. The first Pentium M branded CPU, codenamed Banias, was followed by the second, Dothan. After the Pentium M branded processors, Intel released the Core branded dual-core mobile Yonah CPU with a modified microarchitecture. The Pentium M branded CPUs can be considered the end of the Intel P6 microarchitecture.

Overview

The Pentium M represented a new and radical departure for Intel, as it was not a low-power version of the desktop-oriented Pentium 4, but instead a heavily modified version of the Pentium III Tualatin design (itself based on the Pentium Pro core design). It is optimised for power efficiency, a vital characteristic for extending notebook computer battery life. Running with very low average power consumption and much lower heat output than desktop processors, the Pentium M runs at a lower clock speed than the laptop version of the Pentium 4 (the Pentium 4-Mobile, or P4-M), but with similar performance: a 1.6 GHz Pentium M can typically attain the performance of a 2.4 GHz Pentium 4-M.[1]

The Pentium M coupled the execution core of the Pentium III with a Pentium 4 compatible bus interface, an improved instruction decoding/issuing front end, improved branch prediction, SSE2 support, and a much larger cache. The usually power-hungry secondary cache uses an access method that avoids switching on any parts of it which are not being accessed. Other power-saving methods include dynamically variable clock frequency and core voltage, allowing the Pentium M to throttle its clock speed when the system is idle in order to conserve energy, using SpeedStep 3 technology (which has more sleep stages than previous versions of SpeedStep). With this technology, a 1.6 GHz Pentium M can effectively throttle to clock speeds of 600 MHz, 800 MHz, 1000 MHz, 1200 MHz, 1400 MHz and 1600 MHz; these intermediate clock states allow the CPU to better match its clock speed to conditions. The power requirements of the Pentium M vary from 5 watts when idle to 27 watts at full load. This is useful to notebook manufacturers, as it allows them to fit the Pentium M into smaller notebooks.
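A toy Python governor conveys the idea (purely illustrative; the real SpeedStep policy is implemented in hardware, firmware, and the operating system): given a utilization estimate, pick the lowest of the clock states listed above that still covers demand.

```python
# The 1.6 GHz Pentium M's SpeedStep clock states, in MHz.
P_STATES = [600, 800, 1000, 1200, 1400, 1600]

def pick_state(utilization):
    """Toy policy: the lowest clock that still covers current demand."""
    demand_mhz = utilization * P_STATES[-1]
    for mhz in P_STATES:
        if mhz >= demand_mhz:
            return mhz
    return P_STATES[-1]

for load in (0.05, 0.45, 0.95):
    print(f"load {load:.0%} -> {pick_state(load)} MHz")
    # load 5% -> 600 MHz, load 45% -> 800 MHz, load 95% -> 1600 MHz
```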

Although Intel has marketed the Pentium M exclusively as a mobile product, motherboard manufacturers such as AOpen, DFI and MSI have been shipping Pentium M compatible boards designed for enthusiast, HTPC, workstation and server applications. An adapter, the CT-479, has also been developed by ASUS to allow the use of Pentium M processors in selected ASUS motherboards designed for Socket 478 Pentium 4 processors. Shuttle Inc. offers packaged Pentium M desktops, marketed for low energy consumption and minimal cooling system noise.

Pentium M processors are also of interest to embedded systems' manufacturers because the low power consumption of the Pentium M allows the design of fanless and miniaturized embedded PCs.

Source: http://en.wikipedia.org/wiki/Pentium_M