[CMSC 411 Home] | [Syllabus] | [Project] | [VHDL resource] | [Homework 1-6] | [lecture notes]

[Homework 7-12] | [news] | [files]

Top 500 supercomputers: www.top500.org

CMSC 411 news from various sources

Yes, some may be old news.

Yet, old news may be good or bad news.

Many topics related to computers and software.


EE and Quantum relations to Human brain October 10, 2018

Electrical Communication in the Human Body: Theory and Application. Every action of thought or movement is governed by complex electrical signals. While electrical communication and signal processing are commonly associated with analog and digital circuitry, bioelectricity follows similar concepts. This presentation will provide an introduction to bioelectricity in the heart and discuss the similarities between bioelectrical conduction and cable theory, the application of passive (RC) electrical filters, and electromagnetic fields. Location: National Electronics Museum.

Human brain neurons have some similarity to quantum qubits. Statistically, about 1,000,000 neurons fire incorrectly every second of human life. Fortunately, this is a very small percentage of the neurons in a human brain. Thus we often hear "To err is human."

supercomputer September 10, 2018

Japanese supercomputer ATERUI II at the National Astronomical Observatory of Japan is the fastest system in the world dedicated entirely to astronomy research. ATERUI II, a Cray XC50 system, is number 83 on the TOP500 ranking of supercomputers. Researchers brought ATERUI II online in June with more than 40,000 total processing cores supporting up to 3 quadrillion operations per second. The supercomputer's Intel Xeon Gold 6148 processors each come with 20 cores with a maximum frequency of 3.7GHz and 27.5MB of cache. ATERUI II also offers 385 terabytes of RAM, and is expected to be one of the world's best multitasking supercomputers, dividing a simulation into sections and working from multiple angles. ATERUI II is powerful enough to model gravitational variables for a galaxy of 100 billion individual stars. More than 150 teams are scheduled to use the supercomputer this year. Because simulations help inform future observations, ATERUI II will help scientists decide on targets and observational methods more rapidly.

robotics September 7, 2018

Researchers at Tel Aviv University in Israel have developed Robat, a fully autonomous bat-like robot that uses echolocation to move through new environments. Robat includes a biologically plausible signal processing approach to extract information about an object's position and identity, a feature that makes the system unique among other attempts to apply sonar to robotics. The system contains an ultrasonic speaker that mimics the mouth of a real bat and produces frequency modulated chirps similar to those of bats. In addition, Robat is equipped with two ultrasonic microphones that mimic ears. The robot uses an artificial neural network to delineate the borders of objects it encounters and classify them. During testing, Robat was able to move autonomously through novel outdoor environments and map them using only sound. The researchers also found that the robot was able to classify objects with a 68% balanced accuracy, and to determine obstacles with a 70% accuracy.

AMD vs Intel June 12, 2018

Hot on the heels of Intel's demonstration run of a 28-core CPU, AMD has decided it too can participate in the core wars, demonstrating its Threadripper 2 CPU with a full 32 cores and 64 threads at its Computex keynote. Based on a refined Zen architecture dubbed Zen+, it will be manufactured on a 12nm process and slot into the existing TR4 socket. Launch is slated for the third quarter of 2018, possibly ahead of Intel's 28-core offering.

AI to mimic voice March 7, 2018

Researchers at Chinese search giant Baidu say they have developed an artificial intelligence that can learn to precisely mimic a person's voice based on less than 60 seconds' worth of listening to it. They note this milestone uses Baidu's text-to-speech synthesis system Deep Voice, which was trained on more than 800 hours of audio from 2,400 speakers. The team says Deep Voice requires only 100 five-second segments of vocal training data to sound its best, but a version trained on only 10 five-second samples was able to deceive a voice-recognition system more than 95 percent of the time. "We see many great use cases or applications for this technology," says Baidu's Leo Zou. "For example, voice cloning could help patients who lost their voices. This is also an important breakthrough in the direction of personalized human-machine interfaces." Zou also thinks the technique could advance the creation of original digital content.

NIST superconducting synapse Jan 29, 2018

NIST's Superconducting Synapse May Be Missing Piece for 'Artificial Brains' (NIST News, Laura Ost, January 26, 2018). Researchers at the U.S. National Institute of Standards and Technology (NIST) say they have constructed a superconducting "synapse" switch that "learns" in the manner of a biological system and could link processors and store memories in future computers that operate like the human brain. The team views the synapse as a key ingredient for neuromorphic computers; it consists of a compact metallic cylinder 10 micrometers in diameter. The device processes incoming electrical spikes to tailor spiking output signals, with processing based on a flexible internal design that is experientially or environmentally tunable. The NIST synapse also fires 1 billion times a second--much faster than a human synapse--while using only about one ten-thousandth as much energy. The researchers note the synapse would be employed in neuromorphic systems built from superconducting components, which can transmit electricity without resistance, with data transmitted, processed, and stored in units of magnetic flux.

Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89

The New York Times, Steve Lohr, June 4, 2017. Software engineer Jean E. Sammet, who co-designed the Common Business Oriented Language (COBOL) and was elected the first female president of the ACM in 1974, passed away on May 20 at the age of 89. Sammet achieved a level of prominence in computing beyond most women of her generation, and she once said her ambition was "to put every person in communication with the computer," according to University of Maryland professor Ben Shneiderman. The Computer History Museum's Dag Spicer says Sammet's book, "Programming Languages: History and Fundamentals," published in 1969, "was, and remains, a classic" in the field. COBOL remains an essential element in the mainframes underlying corporate and government agency operations worldwide. Sammet worked with five other programmers designing COBOL over a period of two weeks, and the language enabled innovative techniques for describing and representing data in computer code. Sammet later worked to inject more engineering discipline into the language.

20 Apr 2017 Graphene

The new method, reported today in Nature, uses graphene (single-atom-thin sheets of graphite) as a sort of "copy machine" to transfer intricate crystalline patterns from an underlying semiconductor wafer to a top layer of identical material. The engineers worked out carefully controlled procedures to place single sheets of graphene onto an expensive wafer. They then grew semiconducting material over the graphene layer. They found that graphene is thin enough to appear electrically invisible, allowing the top layer to "see" through the graphene to the underlying crystalline wafer, imprinting its patterns without being influenced by the graphene. Graphene is also rather "slippery" and does not tend to stick to other materials easily, enabling the engineers to simply peel the top semiconducting layer from the wafer after its structures have been imprinted.

Bristol and Bath to Build World's Largest ARM-Based Supercomputer

TechSPARK April 3, 2017 Researchers at the universities of Bristol, Bath, Cardiff, and Exeter in the U.K. are working together to build Isambard, the world's largest ARM-based supercomputer. Most conventional supercomputers use Intel x86 processors, but the new system will have 10,000 64-bit ARM cores. The Isambard supercomputer will sit between the large national Archer high-performance computing (HPC) service and the local HPC clusters within individual universities. Isambard will be one of the world's first systems to be based on the Vulcan server-class chip, which promises more memory bandwidth instead of higher peak performance. The researchers say Isambard's architecture will make it very enticing for scientists studying complex problems. In addition, the researchers are providing a service to enable algorithm development and the porting and optimization of scientific codes to ARM64 machines. Isambard also will be equipped with a small number of processors based on other advanced architectures.

IEEE Spectrum Tech Alert March 23, 2017

Efficiency of Silicon Solar Cells Climbs. A silicon solar cell, at its very best, will never be able to transform more than 29 percent of the sun's rays into electricity. But we'll take that if we can get it. That's why it was big news when researchers at Kaneka Corp., in Osaka, Japan, reported this week that they've developed a silicon solar cell with 26.3 percent efficiency (a 0.7-percentage-point increase over the previous record). IEEE Spectrum is tracking these slim but significant efficiency boosts in a PV interactive that is updated as soon as new, record-breaking devices are certified.

MIT News 01/30/2017 parallel from compiler

Researchers at the Massachusetts Institute of Technology (MIT) will present next week, at the ACM Symposium on Principles and Practice of Parallel Programming (PPoPP 2017) in Austin, TX, a modified version of a popular open source compiler that optimizes parallel code before adding the code needed for parallel execution. The compiler "now optimizes parallel code better than any commercial or open source compiler, and it also compiles where some of these other compilers don't," says MIT professor Charles E. Leiserson, who received the ACM Paris Kanellakis Theory and Practice Award for 2013 and the ACM-IEEE Computer Society Ken Kennedy Award for 2014. The enhancement stems from optimization approaches that already existed in the modified compiler. The main advancement is an intermediate representation using a fork-join model of parallelism, with the compiler's front end customized to a fork-join language called Cilk, which adds only two commands to the C programming language: the fork-initiating "spawn" and the join-initiating "sync." Programs written in Cilk must explicitly tell the runtime when to check on the progress of computations and rebalance cores' assignments, and these invocations are tracked by the compiler. The MIT team's intermediate representation adds three commands--detach, reattach, and sync--to the compiler's middle end. The reattach command specifies the recombination of parallel tasks' results, making fork-join code resemble serial code so that many of a serial compiler's optimization algorithms will work on it without alteration.
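To make the fork-join model concrete, here is a minimal Cilk sketch (a hedged illustration, not MIT's code). The article's "spawn" and "sync" are spelled cilk_spawn and cilk_sync in current Cilk dialects, and the compile command in the comment assumes an OpenCilk-style toolchain.

    /* Parallel Fibonacci in Cilk: C plus the two fork-join keywords
       described above.  Assumed toolchain: an OpenCilk-style compiler,
       e.g. clang -fopencilk fib.c */
    #include <stdio.h>
    #include <cilk/cilk.h>

    long fib(long n) {
        if (n < 2) return n;
        long x = cilk_spawn fib(n - 1);  /* "spawn": callee may run in parallel */
        long y = fib(n - 2);             /* parent continues in the meantime    */
        cilk_sync;                       /* "sync": wait for spawned children   */
        return x + y;
    }

    int main(void) {
        printf("fib(30) = %ld\n", fib(30));
        return 0;
    }

The detach/reattach/sync representation the MIT compiler adds lives in the middle end; the point of the example is only how little the source language changes.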

SSD drive data

SSD, Solid State Drives, 10/8/2016. OCZ Intrepid 3700 Series, 2TB SATA SSD (OCZ Technology part #/MPN: IT3RSK41ET5H0-1920).

General: enterprise grade; capacity 2TB; solid-state memory; electrical interface SATA 600 (6.0Gbps, up to 5x faster than the previous SATA generation); form factor 2.5in x 9mm (fits most laptops); encryption/SED supported. Features superior sustained performance and I/O consistency, enterprise-grade endurance with a 5-year warranty for both read-intensive and mixed-workload models, and an advanced reliability feature set.

Performance: sustained sequential read 540 MB/s; sustained sequential write 470 MB/s; sustained 4K random read 70,000 IOPS; sustained 4K random write 9,000 IOPS.

Reliability/security: MTBF 2 million hours; bit error rate (BER) 1 sector per 10^17 bits read; power-fail protection (full in-flight data protection against unexpected system power loss); end-to-end data-path protection via CRC; 256-bit AES-compliant data encryption; product health monitoring via Self-Monitoring, Analysis and Reporting Technology (SMART) with enterprise attributes.
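A quick sanity check on the random-I/O figures: IOPS times transfer size gives bandwidth, which shows why 70,000 4K read IOPS is a much smaller data rate than the 540 MB/s sequential number. A small C sketch (the 4 KiB transfer size comes from the "4K random" spec; the rest is arithmetic):

    /* Convert a 4K-random-IOPS rating into approximate bandwidth.
       70,000 IOPS x 4 KiB = ~273 MiB/s, well under the sequential
       540 MB/s figure -- which is why random I/O is quoted in IOPS. */
    #include <stdio.h>

    int main(void) {
        double iops = 70000.0;        /* sustained 4K random reads   */
        double xfer = 4096.0;         /* bytes per operation (4 KiB) */
        printf("~%.0f MiB/s at 4 KiB per op\n",
               iops * xfer / (1024.0 * 1024.0));
        return 0;
    }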

Seagate Demonstrates Humongous 60TB Solid-State Drive

At the Flash Memory Summit conference in Santa Clara on Aug 9, 2016, attendees were observed actually scratching their heads and wondering, "How is that possible?" The object of their incredulity was a monstrous 60TB solid-state drive introduced by Seagate Technology. This is a rather precipitous leap in capacity, to say the very least, from the previous record-capacity SSD, a 2.5-inch, 16TB unit released a year ago by Samsung that costs $7,000. Seagate said the drive, officially called the 60TB Serial Attached SCSI (SAS) SSD, is the largest-capacity solid-state drive ever demonstrated, and it would be impossible to argue that fact. The 2.5-inch SSD is currently in demonstration mode only. Production and distribution of the drive isn't expected until sometime in 2017, Seagate said.

Researchers at Google's U.K. DeepMind 9/8/2016

Researchers at Google's U.K.-based DeepMind unit say they have made considerable progress in developing computer-generated speech, claiming tests of their WaveNet system with human listeners demonstrated a significant narrowing of the quality gap between computer-generated and human speech. A source familiar with the WaveNet research says the system diverges from existing text-to-speech solutions by concentrating on the actual sound waves being produced, instead of using human voice recordings to reassemble the sounds to match the language being spoken. WaveNet employs a neural network to analyze raw waveforms and attempts to model probable patterns. The source says the system is highly complex, and must digest at least 16,000 waveform samples every second, producing vast volumes of data. DeepMind says WaveNet's ability to model sound waves enables it to create speech that imitates any human voice, and that it can produce short piano compositions by sampling classical music. The researchers note computerized speech generation has garnered less interest than natural language recognition in the recent artificial intelligence race. "Allowing people to converse with machines is a longstanding dream of human-computer interaction," the DeepMind team says.

Quantum Computing May Have Scored in Australian Research Funding

ZDNet (09/08/16) Asha McLean

The Australian government is allocating more than $200 million in funding to nine new Australian Research Council (ARC) Centers of Excellence, including centers devoted to developing advanced quantum technologies. The University of Queensland center will build quantum machines for practical applications and create quantum materials, engines, and imaging systems. Possible applications include material simulators, diagnostic technologies, and geographical surveying tools. The University of New South Wales (UNSW) received funding for its ARC Center of Excellence for Quantum Computation and Communication Technology, which opened in April. UNSW researchers currently are working to build the world's first quantum computer in silicon. The university already has developed a way to write and manipulate a quantum code using two quantum bits in a silicon microchip, and a team of engineers has built a quantum logic gate in silicon.

US new supercomputer

Computerworld (06/21/2016) Patrick Thibodeau

The U.S. plans to have a supercomputer by early 2018 that will offer about twice the performance of China's Sunway TaihuLight system, which can reach a theoretical peak speed of 124.5 petaflops. The U.S. Department of Energy's (DOE) Oak Ridge National Laboratory is expecting an IBM system, called Summit, which will be capable of 200 petaflops within the next two years. Summit will pair IBM processors with Nvidia graphics-processing units, and the DOE has two other major supercomputers planned for 2018. One system, Sierra, is a planned 150-petaflop IBM system that will be located at the Lawrence Livermore National Lab, and is scheduled to be available by mid-2018. The other supercomputer, a Cray/Intel system called Aurora, is due by late 2018 at the Argonne National Laboratory. The U.S. government is pursuing the National Strategic Computing Initiative, which calls for accelerating the delivery of exascale computing, increasing coherence between the technology base used for modeling and simulation and that for data analytic computing, charting a path forward to a post-Moore's Law era, and building the overall capacity and capability of an enduring national high-performance computing ecosystem. "[The] strength of the U.S. program lies not just in hardware capability, but also in the ability to develop software that harnesses high-performance computing for real-world scientific and industrial applications," the DOE says.

Nvidia GPU-Powered Autonomous Car Teaches Itself to See and Steer

Network World (04/28/2016) Steven Max Patterson

An Nvidia engineering team built an autonomous car that combines a camera, a Drive PX embedded computer, and 72 hours of training data. The researchers trained a convolutional neural network (CNN) to map raw pixels from the camera directly to steering commands. The training system used three cameras and two computers to obtain three-dimensional video images and steering angles from a vehicle driven by a human. The Nvidia researchers watched for changes in the steering angle as the training signal mapped the human driving patterns onto the bitmap images recorded by the cameras, and the CNN learned to generate internal representations of the processing steps of driving. The open source machine-learning framework Torch 7 was used to turn this learning into processing steps that autonomously saw the road, other vehicles, and obstacles in order to steer the test vehicles. The steering directions the CNN produced in simulated response to the 10-frames-per-second images captured by the human-driven car were compared to the human steering angles, teaching the system to see and steer. On-road testing proved CNNs can learn the task of lane detection and road following without manually and explicitly deconstructing and classifying road or lane markings, semantic abstractions, path planning, and control.

World's First Parallel Computer Based on Biomolecular Motors

Design a Computer to solve a problem ACM TechNews 3/2/2016

A new parallel-computing approach can solve combinatorial problems, according to a study published in Proceedings of the National Academy of Sciences. Researchers from the Max Planck Institute of Molecular Cell Biology and Genetics and the Dresden University of Technology collaborated with an international team on the technology. The researchers note significant advances have been made in conventional electronic computers in the past decades, but their sequential nature prevents them from solving problems of a combinatorial nature. The number of calculations required to solve such problems grows exponentially with the size of the problem, making them intractable for sequential computing. The new approach addresses these issues by combining well-established nanofabrication technology with molecular motors that are very energy-efficient and inherently work in parallel. The researchers demonstrated the parallel-computing approach on a benchmark combinatorial problem that is very difficult to solve with sequential computers. The team says the approach is scalable, error-tolerant, and dramatically improves the time to solve combinatorial problems of size N. The problem to be solved is "encoded" within a network of nanoscale channels by both mathematically designing a geometrical network that is capable of representing the problem, and by fabricating a physical network based on this design using lithography. The network is then explored in parallel by many protein filaments self-propelled by a molecular layer of motor proteins covering the bottom of the channels.
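The benchmark in the study was a small instance of the subset-sum problem. A brute-force C sketch (the particular numbers are illustrative) shows the combinatorial explosion referred to above: the loop enumerates all 2^N subsets, so every added element doubles the work of a sequential machine, whereas the molecular-motor network explores the branches in parallel.

    /* Brute-force subset sum: does any subset of set[] add up to target?
       Enumerates all 2^n subsets -- exponential work for a sequential
       computer, which is the point of the parallel approach above. */
    #include <stdio.h>

    int main(void) {
        int set[] = {2, 5, 9, 11, 17, 23};   /* illustrative instance */
        int n = sizeof set / sizeof set[0];
        int target = 30;
        for (unsigned long mask = 0; mask < (1UL << n); mask++) {
            int sum = 0;
            for (int i = 0; i < n; i++)
                if (mask & (1UL << i)) sum += set[i];
            if (sum == target) {
                printf("found a subset summing to %d (mask %#lx)\n", target, mask);
                return 0;
            }
        }
        printf("no subset sums to %d\n", target);
        return 0;
    }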

The need for speed 1/26/2016

After 2,500 Years, a Chinese Gaming Mystery Is Solved. Computer scientist John Tromp determined the total number of legal positions on Go's standard 19x19 board (roughly 2.08 x 10^170) using servers at the Institute for Advanced Study's School of Natural Sciences, the IDA's Center for Communications Research, and Hewlett-Packard's Helion Cloud. The software Tromp used was originally developed in 2005. By 2007, the researchers were able to compute the number of legal positions on a 17x17 board, which exhausted the hardware resources available at the time. Tromp provides the software he used on his GitHub repository, but it requires a server with 15 terabytes of fast scratch disk space, eight to 16 cores, and 192 GB of RAM. Although the leap from calculating the legal positions on a 17x17 board to a 19x19 board may seem small, each increase in the board's dimensions demands a fivefold increase in the memory, time, and disk space required, according to Tromp. He plans to continue work on his "Cuckoo Cycle" proof-of-work system and solve large-scale Connect Four problems, and he is especially interested in improving similar work on chess. "Having the ability to determine the state complexity of the greatest abstract board game, and not doing it, that just doesn't sit right with me," Tromp says.

A New Quantum Approach to Big Data 2015

Researchers at the Massachusetts Institute of Technology (MIT), the University of Waterloo, and the University of Southern California have developed an approach to using quantum computers to handle massive digital datasets, which could potentially accelerate the solving of astronomically complex problems. Algebraic topology is core to the new technique as it can reduce the impact of the unavoidable distortions that arise every time someone collects data about the real world, says MIT professor Seth Lloyd. Topological analysis "represents a crucial way of getting at the significant features of the data, but it's computationally very expensive," Lloyd notes. "This is where quantum mechanics kicks in." He cites as an example a dataset with 300 points, saying tackling this challenge with a conventional approach to analyzing all the topological features in that system would require "a computer the size of the universe," making solving the problem impossible. "That's where our algorithm kicks in," he says. Lloyd says solving the same problem with the new system, using a quantum computer, requires only 300 quantum bits, and he believes a device this size could be realized in the next few years. "Our algorithm shows that you don't need a big quantum computer to kick some serious topological butt," Lloyd says.

Cadence, Imec announce 5nm test chip tapeout 10/09/2015

Imec and Cadence optimised design rules, libraries and place-and-route technology to obtain optimal power, performance and area (PPA) scaling via Cadence Innovus Implementation System.

How a Microscopic Supercapacitor Will Supercharge Mobile Electronics 10/09/2015

Capacitors are the one type of electronic device inside your computer that never made it to Lilliput. But if they could be made to obey Moore’s Law and eventually shrink down to microscale, they could do things like powering a cellphone for days on a single charge. A group of researchers at the University of California, Los Angeles, say they’ve created just such a device.

Intel invests $60M in Chinese drone maker 9/1/2015

Drones have gone from being viewed as pricey hobbyist toys to an enticing technology with serious commercial applications including agriculture, construction, product delivery and others.

Fastest Supercomputers, top 500 7/13/2015

China's 33,863-teraflop Tianhe-2 supercomputer retained the number-one position it has held for more than two years on the latest Top 500 ranking of supercomputers. The list, which is compiled by researchers at the University of Mannheim, the University of Tennessee, and Lawrence Berkeley National Laboratory, was released today and saw the number of U.S. supercomputers on the list dip close to an all-time low. The Cray Titan at Oak Ridge National Laboratory and the IBM Sequoia at the Lawrence Livermore National Laboratory came in second and third place, respectively. The only new machine in the top 10 is Saudi Arabia's Shaheen II supercomputer, which is ranked seventh. The Top 500 list is published twice annually and the U.S. has 231 machines on the latest list, near the all-time low of 226 it had in mid-2002. The aggregate computing power of the machines on the latest list is 361 petaflops, up 31 percent from the same time last year.

WORLDS THINNEST TRANSISTOR IS ONLY THREE ATOMS THICK 5/5/2015

Researchers discovered a new process for producing ultra-thin transistors. At just three atoms thick, they’re currently the world’s thinnest piece of electronics. elecp-media.com

Nine Reasons Linux Rules the Supercomputing Space 7/23/2014

The latest TOP500 List of the fastest supercomputers in the world helped many in the technology community understand what open-source aficionados have known for years: Linux has quickly become the operating system of choice in the high-performance computing (HPC) market, growing from relative obscurity 15 years ago to powering 97 percent of the fastest computers in the world. But its appeal is found in more than cost or choice. Here are a few of the main reasons Linux has grown to own the lion's share of the fastest supercomputers in the world.

Although the United States remains the top country in terms of overall systems, with 233, this is down from 265 on the November 2013 list. The number of Chinese systems on the list rose from 63 to 76, giving the Asian nation nearly as many supercomputers as the United Kingdom, with 30; France, with 27; and Germany, with 23—combined. Japan also increased its showing, up to 30 from 28 on the previous list.

HP has the lead in systems and now has 182 systems (36 percent), compared to IBM, with 176 systems (35 percent). HP had 196 systems (39 percent) six months ago, and IBM had 164 systems (33 percent) six months ago. In the system category, Cray remains third with 10 percent (50 systems).

History of Linux on supercomputers: In 1994, the first Beowulf cluster was built at NASA, using Linux, as an alternative to the very expensive HPC supercomputers. "Beowulf Clusters are scalable performance clusters based on commodity hardware, on a private system network, with open-source software (Linux) infrastructure. The designer can improve performance proportionally with added machines. The commodity hardware can be any of a number of mass-market, stand-alone compute nodes as simple as two networked computers, each running Linux and sharing a file system, or as complex as 1,024 nodes with a high-speed, low-latency network."

Earthquake Simulation Tops One Quadrillion Flops: Over a petaflop.

Computational Record on SuperMUC, 04/15/14. The SuperMUC high-performance computer at the Leibniz Supercomputing Center of the Bavarian Academy of Sciences and Humanities has exceeded the one-petaflop mark (10^15 floating-point operations per second), executing 1.09 quadrillion floating-point operations per second during an earthquake simulation. The team of computer scientists, mathematicians, and geophysicists involved in the virtual experiment credit the optimization of the SeisSol software with making it possible. The speed of calculations increased by a factor of five, and the earthquake simulation software maintained this unusually high performance level throughout the entire three-hour simulation run using all of SuperMUC's 147,456 processor cores. The extensive optimization and complete parallelization of the 70,000 lines of SeisSol code allowed a peak performance of up to 1.42 petaflops, which corresponds to 44.5 percent of SuperMUC's theoretical available capacity and makes it one of the most efficient simulation programs of its kind. "Thanks to the extreme performance now achievable, we can run five times as many models or models that are five times as large to achieve significantly more accurate results," says Ludwig Maximilian University's Christian Pelties. "Our simulations are thus inching ever closer to reality."

Seagate Introduces world's fastest 6TB hard drive

In early April 2014, Seagate launched its fastest, enterprise-level 6 TB HDD, following HGST's helium-filled 6TB HDD. Seagate's 6 TB drive, ST6000NM0004, is priced at around $700, spins at 7,200 rpm, and carries 128 MB of cache. HGST's platters spin in helium, while Seagate's spin in ordinary air (oxygen and nitrogen). Both enterprise drives share the same headline specifications: capacity 6 TB, SATA 6Gb/s interface, 7,200 RPM spin speed, 128 MB cache, 3.5-inch form factor.

New Supercomputer Uses 281 terabytes of SSDs Instead of DRAM and Hard Drives (11/04/13)

Lawrence Livermore National Laboratory (LLNL) this month is deploying Catalyst, a new supercomputer that uses solid-state drive (SSD) storage as an alternative to dynamic random access memory and hard drives, and delivers a peak performance of 150 teraflops. Catalyst has 281 terabytes of total SSD storage and is configured as a cluster broken into 324 computing units, each of which has two 12-core Xeon E5-2695v2 processors, totaling 7,776 central processing unit cores. Catalyst is built around the Lustre file system, which helps break bottlenecks and improves internal throughput in distributed computing systems. "As processors get faster with every generation, the bottleneck gets more acute," says Intel's Mark Seager. He notes that Catalyst offers a throughput of 512GB per second, which is the same as LLNL's Sequoia, the world's third-fastest supercomputer. Although Catalyst's peak performance is nowhere close to the world's fastest high-performance computers, its use of SSD technology is noteworthy. Experts say SSDs are poised for widespread enterprise adoption as they consume less energy and are becoming more reliable. For example, faster SSDs increasingly are replacing hard drives in servers to improve data access rates, and they also are being used in some servers as cache, where data is temporarily stored for quicker processing.

IBM Scientists Show Blueprints for Brain-Like Computing (08/08/13)

IBM researchers have created TrueNorth, a computer architecture designed to work more like the human brain. The architecture relies on complex simulations that could lead to a new generation of machines that function more like biological brains. The researchers used TrueNorth to demonstrate a way to use chips with neurosynaptic cores for specific tasks, such as building a more efficient biologically-inspired artificial retina. Unlike conventional computer architectures, TrueNorth stores and processes information in a distributed, parallel way, like the neurons and synapses in a brain. The researchers also developed software that runs on a conventional supercomputer but simulates the functioning of a massive network of neurosynaptic cores. The digital neurons mimic the independent nature of biological neurons, developing different response times and firing patterns in response to input from neighboring neurons. TrueNorth programs are written using special blueprints called corelets, each of which specifies the basic functioning of a network of neurosynaptic cores. TrueNorth comes with a library of 150 pre-designed corelets, each for a specific task. The researchers say the technology could eventually be incorporated into smartphones and automobiles. "We are extending the boundaries of what computers can do efficiently," says IBM's Dharmendra S. Modha, the project's lead researcher.

Simulating Human Brain 8/2/13

Simulating 1 Second of Real Brain Activity Takes 40 Minutes and 83K Processors. The world's fourth-fastest supercomputer needed 40 minutes to simulate one second of actual brain activity on a network equivalent to 1 percent of a brain's neural network. A team of Japanese and German researchers were behind the effort to simulate the activity of 1.73 billion nerve cells connected by 10.4 trillion synapses, the largest-ever simulation of neural activity in the human brain. The simulation involved 82,944 processors on the K supercomputer and 1 petabyte of memory, amounting to 24 bytes per synapse. If computing time scales linearly with the size of the network, it would take roughly 2.8 days (100 x 40 minutes) to simulate 1 second of activity for an entire brain.

Multi-Gigahertz FPGA Signal Processing

Design teams from Xilinx and Synopsys know the importance of creating parallel architectures to accelerate signal-processing applications on FPGA devices. In this article, learn how an FPGA clocked at 500MHz can support FFTs with gigasample-per-second data throughput rates.
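The arithmetic behind that headline: streaming throughput equals clock rate times samples consumed per clock, so a 500MHz fabric must accept multiple samples per cycle to exceed 1 GS/s. The 1-to-4 lane sweep below is an assumed illustration, not a figure from the article.

    /* Streaming throughput = clock rate x samples per clock.
       At 500 MHz, one sample per clock is only 0.5 GS/s; a parallel
       datapath taking 4 samples per clock reaches 2 GS/s. */
    #include <stdio.h>

    int main(void) {
        double f_clk = 500e6;            /* FPGA clock from the article */
        for (int lanes = 1; lanes <= 4; lanes *= 2)
            printf("%d sample(s)/clock -> %.1f GS/s\n",
                   lanes, f_clk * lanes / 1e9);
        return 0;
    }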

20 Great Years of Linux and Supercomputers 07/29/13)

In the culmination of a steady rise to dominance over the past 20 years, Linux is now the operating system used on 95.2 percent of the world's 500 fastest supercomputers, according to the most recent Top500 supercomputer rankings. Linux debuted on the Top500 list in 1998, consistently dominated the top 10 over the past decade, and has accounted for more than 90 percent of the list since June 2010, according to the Linux Foundation. "Linux [became] the driving force behind the breakthroughs in computing power that have fueled research and technological innovation," says the Linux Foundation.

ARM unveils new video cores at Computex 6/5/2013

The Cortex-A12 processor core and the Mali-T622 GPU address the mid-range performance ground between its Cortex-A9 and A15. The processors were designed with the 28nm node in mind and have support from Globalfoundries for its 28-SLP process and for TSMC's 28HPM process.

Google and NASA Launch Quantum Computing AI Lab 05/16/13

The U.S. National Aeronautics and Space Administration (NASA), Google, and the Universities Space Research Association (USRA) are launching the Quantum Artificial Intelligence Lab to explore the use of quantum computing to advance the machine-learning branch of artificial intelligence. The D-Wave Two quantum computer will be installed at the NASA Advanced Supercomputing Facility at the Ames Research Center, slated for use in government, industrial, and university research later this year. Although Google intends to use the D-Wave Two to refine its Web search and speech-recognition technology, university researchers are likely to use it for disease and climate models. NASA uses quantum computing to model space weather, simulate planetary atmospheres, analyze huge volumes of mission data, and other functions. Through the Quantum Artificial Intelligence Lab, USRA will invite researchers from around the country to use the D-Wave Two, and 20 percent of its computing time will be open for free to the university community via competitive selection. NASA and Google will evenly divide the remaining computing time.

Plastic film turns mobile display into 3D screen 4/4/2013

A nanoengineered screen protector from A*STAR IMRE converts ordinary mobile device displays into 3D screens. The glasses-free 3D accessory measures less than 0.1 mm in thickness.  

Chip analyses cancer-specific microRNAs 3/12/2013

Researchers at the Riken Advanced Science Institute developed a self-powered microfluidic chip that can detect cancer-specific microRNAs in a drop of patient blood in as little as 20 minutes. The presence of cancer biomarkers is detected by fluorescence in the main microfluidic channel.

Separately, researchers from the Institute of Bioengineering and Nanotechnology have engineered a device that mimics the natural tissue environment of the liver, to test newly introduced drugs and predict their toxicity. Using this tool, companies can potentially speed up the drug development process and reduce manufacturing costs.

PS4 will run on AMD's x86 platform 2/26/2013

Sony, which co-developed the Cell processor for the PlayStation 3, is now dropping it along with the Nvidia graphics chip that also powered the PlayStation 3. According to Mark Cerny, the PS4's lead system architect, Sony's latest gaming console will run a new AMD accelerated processing unit which integrates an x86 CPU and GPU on the same die.

Small feature size, another process 2/7/13

CEO Warren East says ARM is willing to assist STMicroelectronics make a success of its fully-depleted silicon-on-insulator (FDSOI) chip manufacturing process, but that it is up to ST to make the process more widely available. FDSOI has emerged at the 28nm node as a potential chip manufacturing alternative to bulk planar CMOS, which is being pushed to 20nm by foundries such as TSMC and Globalfoundries.

Bigger and faster Solid State Drives 12/15/12

OCZ 480GB Vertex 3 SATA 6Gb/s 2.5-Inch Performance Solid State Drive (SSD), with max 530MB/s read and max 4KB write 40K IOPS. Price: $397.72, and this item ships for free with Super Saver Shipping.

Newer model: OCZ Technology 512GB Vertex 4 Series SATA 6.0Gb/s 2.5-Inch Solid State Drive (SSD), with the industry's highest 120K IOPS and a 5-year warranty (VTX4-25SAT3-512G), $474.99.

Product features (size 480 GB, style 9mm):
Sustained sequential read: up to 530 MB/s (SATA 6Gbps), up to 280 MB/s (SATA 3Gbps)
Sustained sequential write: up to 450 MB/s (SATA 6Gbps), up to 260 MB/s (SATA 3Gbps)
MTBF: 2,000,000 hours
ECC recovery: up to 55 bits correctable per 512-byte sector (BCH)
4K random read (AS-SSD): 56,000 IOPS (220 MB/s); 4K random write (AS-SSD): 38,000 IOPS (150 MB/s)
Bundled with a 3.5" desktop adapter bracket
Fully compliant with Serial ATA Revision 3.0 and the ATA/ATAPI-8 standard

On Mac OS X: the new iMac desktops have what is called a "Fusion Drive" option. Since it is integrated and appears as one drive to the user but two to the OS, Apple can play some tricks: Fusion Drive combines 128GB of super fast flash storage with a traditional hard drive. It automatically and dynamically moves frequently used files to flash for quicker access. With Fusion Drive in your iMac, booting is up to 1.7 times faster, and copying files and importing photos are up to 3.5 times faster. Over time, as the system learns how you work, Fusion Drive makes your Mac experience even better, all while letting you store your digital life on a traditional, roomy hard drive.

A Leap Forward in Brain-Controlled Computer Cursors

Stanford University (11/18/12) Kelly Servick. Stanford University researchers have developed ReFIT, an algorithm that improves the speed and accuracy of neural prosthetics that control computer cursors. In a side-by-side comparison, the cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached the performance of a real arm. "These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford," says Stanford professor Krishna Shenoy. The system uses a silicon chip that is implanted in the brain. The chip records "action potentials" in neural activity from several electrode sensors and sends the data to a computer. The researchers want to understand how the system works under closed-loop control conditions, in which the computer analyzes and implements visual feedback taken in real time as the user neurally controls the cursor toward an onscreen target. The system can make adjustments in real time while guiding the cursor to a target, similar to how the hand and eye work in tandem to move a mouse cursor. The researchers designed the algorithm to learn from the user's corrective movements, allowing the cursor to move more precisely than in other systems.

Cray Titan Supercomputer Now World's Fastest November 14, 2012

Cray's huge Titan supercomputer, housed at the Oak Ridge National Laboratory in Tennessee, has displaced IBM's Sequoia as the world's fastest supercomputer. Titan, a massive XK7 system powered by AMD Opteron processors and Nvidia GPU accelerators, hit a performance of 17.59 petaflops, outdistancing Sequoia's 16.32 petaflops.

Intel making 22nm chips

By Chris Preimesberger September 14, 2012 from eWEEK At this year's IDF conference, Intel had a few "firsts" to talk about--the first Atom-powered smartphones, for example. Also on display were Ultrabook laptops and desktop PCs with new, more powerful and power-efficient 22nm chips.

Future of displays

video

5.5GHz is here

August 29, 2012 Big Blue calls its new mainframe--the zEnterprise EC12--the most powerful and technologically advanced version of an IBM system that has been the linchpin of enterprise computing for 48 years. The new mainframe is one of the most secure enterprise systems ever and boasts the world's fastest chip running at 5.5GHz, IBM said.

IBM beats Japan at 16 Petaflops

IBM's Sequoia supercomputer system, based at the U.S. Lawrence Livermore National Laboratory (LLNL), recently achieved 16 petaflops (16 quadrillion floating-point operations per second), breaking the world record set by Japan's K Computer last year and claiming first place in the latest TOP500 list, which was released today at the 2012 International Supercomputing Conference in Hamburg, Germany. The supercomputer field has long been dominated by U.S. technology, but recently international challengers have made great strides. "It's good to see a little competition going back and forth," says Cray CEO Peter Ungaro. "I fully expect Japan and China and Europe to strike back." IBM's Sequoia system is based on a design called Blue Gene/Q, which uses chips the company designed to boost performance while saving energy. Each chip has 16 processors and is based on a technology called Power that has been used in the company's servers for many years. Supercomputers based on IBM's Blue Gene/Q design took four of the top 10 spots on the latest TOP500 list. LLNL researchers plan to use Sequoia to improve simulations used to judge the effectiveness and safety of nuclear weapons.

Japan beats 10 Petaflop

11/4/2011 latest supercomputer

Exascale supercomputers still have challenges

2011 design, software and measurement

Exascale supercomputers by 2020?

exascale challenge and possibility

Intel SSD still expensive, price coming down

Eweek 11/16/2010

Faster than SSD may be available in future

Racetrack Memory using nanowire

IBM shipping 5.2GHz quad core

AMD 12 and Intel 8 multicore

AMD Unveils 12-Core Opterons with HP, Dell, Acer. Chip maker AMD officially launches its new 12-core Opteron processor with help from Hewlett-Packard, Dell, and even Acer, which is now looking to expand into the server market. AMD's new Opteron chip comes along as Intel is expected to release its "Nehalem EX" Xeon chip later this week. Both processors target the data center. Such packages are typically multiple chips with special signals that allow them to act as a single multicore chip. Possibly better memory bandwidth; cache consistency? 3/30/2010 web page

Intel Core i7-980X six core 3.3GHz with overclocking

single chip: hardwaresecrets.com

Facebook usage and storage data

facebook.txt

IBM Roadrunner petaflop computer

Supercomputer sets record, by John Markoff, published June 9, 2008. (In 2009 Jaguar ran at 1.7 petaflops, 1.7*10^15 floating-point operations per second.)

SAN FRANCISCO: An American military supercomputer, assembled from components originally designed for video game machines, has reached a long-sought-after computing milestone by processing more than 1.026 quadrillion calculations per second. The new machine is more than twice as fast as the previous fastest supercomputer, the IBM BlueGene/L, which is based at Lawrence Livermore National Laboratory in California.

The new $133 million supercomputer, called Roadrunner in a reference to the state bird of New Mexico, was devised and built by engineers and scientists at IBM and Los Alamos National Laboratory, based in Los Alamos, New Mexico. It will be used principally to solve classified military problems to ensure that the nation's stockpile of nuclear weapons will continue to work correctly as they age. The Roadrunner will simulate the behavior of the weapons in the first fraction of a second during an explosion. Before it is placed in a classified environment, it will also be used to explore scientific problems like climate change. The greater speed of the Roadrunner will make it possible for scientists to test global climate models with higher accuracy.

To put the performance of the machine in perspective, Thomas D'Agostino, the administrator of the National Nuclear Security Administration, said that if all six billion people on earth used hand calculators and performed calculations 24 hours a day and seven days a week, it would take them 46 years to do what the Roadrunner can in one day.

The machine is an unusual blend of chips used in consumer products and advanced parallel computing technologies. The lessons that computer scientists learn by making it calculate even faster are seen as essential to the future of both personal and mobile consumer computing. The high-performance computing goal, known as a petaflop -- one thousand trillion calculations per second -- has long been viewed as a crucial milestone by military, technical, and scientific organizations in the United States, as well as a growing group including Japan, China, and the European Union. All view supercomputing technology as a symbol of national economic competitiveness.

The Roadrunner is based on a radical design that includes 12,960 chips that are an improved version of an IBM Cell microprocessor, a parallel processing chip originally created for Sony's PlayStation 3 video-game machine. The Sony chips are used as accelerators, or turbochargers, for portions of calculations. The Roadrunner also includes a smaller number of more conventional Opteron processors, made by Advanced Micro Devices, which are already widely used in corporate servers. In addition, the Roadrunner will operate exclusively on the Fedora Linux operating system from Red Hat.

NASA 245 million pixel display



NASA Builds World's Largest Display
Government Computer News (03/27/08) Jackson, Joab

NASA's Ames Research Center is expanding the first Hyperwall, the
world's largest high-resolution display, to a display made of 128 LCD
monitors arranged in an 8-by-16 matrix, which will be capable of
generating 245 million pixels. Hyperwall-II will be the largest
display for unclassified material. Ames will use Hyperwall-II to
visualize enormous amounts of data generated from satellites and
simulations from Columbia, its 10,240-processor supercomputer. "It
can look at it while you are doing your calculations," says Rupak
Biswas, chief of advanced supercomputing at Ames, speaking at the
High Performance Computer and Communications Conference. One gigantic
image can be displayed on Hyperwall-II, or more than one on multiple
screens. The display will be powered by a 128-node computational
cluster that is capable of 74 trillion floating-point operations per
second. Hyperwall-II will also make use of 1,024 Opteron processors
from Advanced Micro Devices, and have 128 graphical display units and
450 terabytes of storage.


Intel integrated memory controller




Yet to be seen: Will Intel shed its other dinosaur,
the North Bridge and South Bridge concept, in order to
achieve integrated IO?




Silicon Nanophotonic Waveguide


How long will it be before your computer is really and truly outdated??
Who will be the first to have their own supercomputer?
 
"IBM researchers reached a significant milestone in the quest to send
information between the "brains" on a chip using pulses of light through
silicon instead of electrical signals on copper wires.
The breakthrough -- a significant advancement in the field of
"Silicon Nanophotonics" -- uses pulses of light rather than electrical
wires to transmit information between different processors on a single chip,
significantly reducing cost, energy and heat while increasing communications
bandwidth between the cores more than a hundred times over wired chips.
The new technology aims to enable a power-efficient method to connect
hundreds or thousands of cores together on a tiny chip by eliminating
the wires required to connect them. Using light instead of wires to send
information between the cores can be as much as 100 times faster and use
10 times less power than wires, potentially allowing hundreds of cores
to be connected together on a single chip, transforming today's large super
computers into tomorrow's tiny chips while consuming significantly less power.
  
IBM's optical modulator performs the function of converting a digital
electrical signal carried on a wire, into a series of light pulses,
carried on a silicon nanophotonic waveguide. First, an input laser beam
(marked by red color) is delivered to the optical modulator.
The optical modulator (black box with IBM logo) is basically a very fast
"shutter" which controls whether the input laser is blocked or transmitted
to the output waveguide. When a digital electrical pulse
(a "1" bit marked by yellow) arrives from the left at the modulator,
a short pulse of light is allowed to pass through at the optical output
on the right. When there is no electrical pulse at the modulator (a "0" bit),
the modulator blocks light from passing through at the optical output.
In this way, the device "modulates" the intensity of the input laser beam,
and the modulator converts a stream of digital bits ("1"s and "0"s)
from electrical signals into light pulses. December 05, 2007"

http://www.flixxy.com/optical-computing.htm


Seagate crams 329 gigabits of data per square inch



Seagate has announced that it is shipping the densest 3.5 inch desktop
hard drive available - cramming an incredible 329 gigabits per square inch.

The new drive, the Barracuda 7200.12, offers 1TB of storage on two
platters; the high density is achieved by using Perpendicular
Magnetic Recording technology. Seagate hopes to add more platters
later this year in order to boost capacity even further.

The Barracuda 7200.12 is a 7,200RPM drive that has a 3Gbps serial ATA
(SATA) interface that offers a sustained transfer rate of up to 160MB/s
and a burst speed of 3Gbps.

Prior to the Barracuda 7200.12, the Seagate drive with the greatest
density was the Barracuda 7200.11, which offered 1.5TB of storage
across four platters.

Too many cores, beyond some point, can be worse


More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows.
Sandia National Laboratories (01/13/09) Singer, Neal

Simulations at Sandia National Laboratories have shown that increasing
the number of processor cores on individual chips may actually worsen
the performance of many complex applications. The Sandia researchers
simulated key algorithms for deriving knowledge from large data sets,
which revealed a significant increase in speed when switching from
two to four cores, an insignificant increase from four to eight
cores, and a decrease in speed when using more than eight cores. The
researchers found that 16 cores were barely able to perform as well
as two cores, and using more than 16 cores caused a sharp decline as
additional cores were added. The drop in performance is caused by a
lack of memory bandwidth and contention between processors over the
memory bus available to each processor. The lack of immediate access
to individualized memory caches slows the process down once the
number of cores exceeds eight, according to the simulation of
high-performance computing by Sandia researchers Richard Murphy,
Arun Rodrigues, and Megan Vance. "The bottleneck now is getting the
data off the chip to or from memory or the network," Rodrigues says.
The challenge of boosting chip performance while limiting power
consumption and excessive heat continues to vex researchers. Sandia
and Oak Ridge National Laboratory researchers are attempting to solve
the problem using message-passing programs. Their joint effort, the
Institute for Advanced Architectures, is working toward exaflop
computing and may help solve the multichip problem.
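
The bottleneck Rodrigues describes is easy to model: each core demands
a share of a fixed memory-bus bandwidth, and past the saturation point
extra cores only add contention. The toy model below uses made-up
constants (not Sandia's data) but reproduces the shape of their
result: near-linear gains up to about eight cores, then decline.

    /* Toy model of bandwidth-limited scaling.  Each core wants B GB/s
       from a shared bus supplying at most M GB/s; cores beyond the
       saturation point M/B add a contention penalty c per core.
       All constants are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double M = 25.0, B = 3.0, c = 0.05;
        for (int n = 1; n <= 64; n *= 2) {
            double s = (n * B <= M)
                     ? n                                   /* compute-bound */
                     : (M / B) / (1.0 + c * (n - M / B));  /* bus-contended */
            printf("%2d cores -> speedup %4.1fx\n", n, s);
        }
        return 0;
    }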

MRI at nano scale


Microscope Has 100 Million Times Finer Resolution Than Current MRI

An artistic view of the magnetic tip (blue) interacting with the 
virus particles at the end of the cantilever.
Scientists at IBM Research, in collaboration with the Center for 
Probing the Nanoscale at Stanford University, have demonstrated 
magnetic resonance imaging (MRI) with volume resolution 100 million 
times finer than conventional MRI. This signals a significant step 
forward in tools for molecular biology and nanotechnology by offering 
the ability to study complex 3D structures at the nanoscale.

By extending MRI to such fine resolution, the scientists have created 
a microscope that may ultimately be powerful enough to unravel the 
structure and interactions of proteins, paving the way for new 
advances in personalized healthcare and targeted medicine.

This advancement was enabled by a technique called magnetic resonance 
force microscopy (MRFM), which relies on detecting ultrasmall 
magnetic forces. In addition to its high resolution, the imaging 
technique is chemically specific, can "see" below surfaces and, 
unlike electron microscopy, is non-destructive to sensitive 
biological materials.

The researchers use MRFM to detect tiny magnetic forces as the sample 
sits on a microscopic cantilever - essentially a tiny sliver of 
silicon shaped like a diving board. Laser interferometry tracks the 
motion of the cantilever, which vibrates slightly as magnetic spins 
in the hydrogen atoms of the sample interact with a nearby nanoscopic 
magnetic tip. The tip is scanned in three dimensions and the 
cantilever vibrations are analyzed to create a 3D image.

Parallel Programming a necessity 2006

With us again today is James Reinders, a senior engineer at Intel.

DDJ: James, how is programming for a few cores different from programming for a few hundred cores?

JR: As we go from a few cores to hundreds, two things happen: 1. Scaling is everything, and single-core performance is truly uninteresting in comparison; and 2. Shared memory becomes tougher and tougher to count on, or disappears altogether.

For programmers, the shift to "Think Parallel" is not complete until we truly focus on scaling in our designs instead of performance on a single core. A program which scales poorly, perhaps because it divides work up crudely, can hobble along for a few cores. However, running a program on hundreds of cores will reveal the difference between hobbling and running. Henry Ford learned a lot about automobile design while building race cars before he settled on making cars for the masses. Automobiles which ran under optimal conditions at slower speeds did not truly shake out a design the way less optimal, high-speed racing conditions did. Likewise, a programmer will find designing programs for hundreds of cores to be a challenge. I think we already know more than we think. It is obvious to think of supercomputer programming, usually scientific in nature, as having figured out how programs can run in parallel. But let me suggest that Web 2.0 is highly parallel -- and is a model which helps with the second issue in moving to hundreds of cores.

Going from a few cores to many cores means several changes in the hardware which impact software a great deal. The biggest change is in memory, because with a few cores you can assume every core has equal access to memory. It turns out having equal access to memory simplifies many things for programmers; many ugly things do not need to be worried about. The first step away from complete bliss is when, instead of equal access (UMA), you move to unequal access that is still universally available (NUMA). In really large computers, memory is usually broken up (distributed) and is simply not globally available to all processors. This is why, in distributed-memory machines, programming is usually done with messages instead of shared memory. Programs can easily move from UMA to NUMA; the only real issue is performance -- and there will be countless tricks in very complex hardware to help mask the need for tuning. There will, nevertheless, be plenty of opportunity for programmers to tune for NUMA the same way we tune for caches today. The gigantic leap, it would seem, is to distributed memory. I have many thoughts on how that will happen, but that is a long way off -- sort of. We see it already in web computing -- Web 2.0, if you will, is a distributed programming model without shared memory -- all using messages (HTML, XML, etc.). So maybe the message passing of the supercomputer world has already met its replacement for the masses: Web 2.0 protocols.

DDJ: Are compilers ready to take advantage of these multi-core CPUs?

JR: Compilers are great at exploiting parallelism and terrible at discovering it. When people ask the question you did of me, I find they are usually wondering about automatic compilers which take my program of today and magically find parallelism and produce great multi-core binaries. That is simply not going to happen. Every decent compiler will have some capability to discover parallelism automatically, but it will simply not be enough. The best explanation I can give is this: it is an issue of algorithm redesign.

We don't expect a compiler to read in a bubble sort function and compile it into a quick sort function. That would be roughly the same as reading most serial programs and compiling them into parallel programs. The key is to find the right balance of how to have the programmer express the right amount of the algorithm and the parallelism, so the compiler can take it the rest of the way. Compilers have done a great job exploiting SIMD parallelism for programs written using vectors or other syntaxes designed to make the parallelism accessible enough for the compiler to discover it without too much difficulty. In such cases, compilers do a great job exploiting MMX, SSE, SSE2, etc. The race is on to find the right balance of programming practices and compiler technology. While the current languages are not quite enough, we've seen small additions like OpenMP yield big results for a class of applications. I think most programming will evolve to use small changes which open up the compiler to seeing the parallelism. Some people advocate whole new programming languages, which allow much more parallelism to be expressed explicitly. This is swinging the pendulum too far for most programmers, and I have my doubts that any one solution is general-purpose enough for widespread usage.

DDJ: Earlier in our conversation, I gave you an I.O.U. for the beer you asked for. Will you share with readers what Prof. Norman R. Scott told you, and why you blame it for making you so confident in the future of computing?

JR: Okay, but since I've been living in the Pacific Northwest some time, you need to know that I'm not likely to drink just any beer. In 1987, my favorite college professor was retiring from teaching at the University of Michigan. He told us that when he started in electronics, he would build circuits with vacuum tubes. He would carefully tune the circuit for that vacuum tube. He thought it was wonderful. But if the vacuum tube blew, he could get a new one but would have to retune the circuit to the new vacuum tube, because they were never quite the same. Now this amazed us, because most of us had helped our dads buy "standard" replacement vacuum tubes at the corner drug store for our televisions when we were kids. So the idea of vacuum tubes not being standard and interchangeable seemed super old to us, because even standard vacuum tubes were becoming rare specialty items at Radio Shack (but also perfected to have lifetime guarantees). Next, Prof. Scott noted that Intel had recently announced a million-transistor processor. He liked to call that VLSII (Very Large Scale Integration Indeed!). Now for the punchline: he said his career spanned inconsistent vacuum tubes to a million transistors integrated on a die the size of a fingertip. He asked if we thought technology (or our industry) was moving FASTER or SLOWER than during his career. We all said "faster!" So he asked: "Where will the industry be when your careers end, since we will start with a million transistors on a chip?" I've always thought that was the scariest thing I ever heard. It reminds me still to work to keep up -- lest I be left behind (or run over). So when people tell me that the challenge before us is huge and never before seen -- and therefore insurmountable -- I'm not likely to be convinced. You can blame Prof. Scott for my confidence that we'll figure it out. I don't think a million vacuum-tube equivalents on a fingertip seemed like anything other than fantasy to him when he started his career -- and now we have a thousand times that.

So I'm not impressed when people say we cannot figure out how to use a few hundred cores. I don't think this way because I work at Intel; I think this way in no small part because of Prof. Scott. Now, I might work at Intel because I think this way. And I'm okay with that. But let's give credit to Prof. Scott, not Intel, for why I think what I do.

DDJ: James, thanks for taking time over the past few weeks for this most interesting conversation.

JR: You're welcome.
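To illustrate the "small additions like OpenMP" Reinders mentions, here is a minimal sketch: a single pragma exposes the loop's parallelism and its sum reduction to the compiler, which handles the threading. The compile command in the comment is a typical invocation, not something from the interview.

    /* Minimal OpenMP example: one pragma marks the loop as parallel
       with a sum reduction; the runtime manages the threads.
       Compile with, e.g., gcc -fopenmp sum.c */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)      /* serial initialization */
            a[i] = 0.5 * i;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)      /* iterations split across cores */
            sum += a[i];

        printf("sum = %.1f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }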
Articles are edited to fit the purpose of this page.
All copyrights belong to the original source.

Other links