Saturday, March 5, 2011

The next operating system with Indian Hands on it....

MIT news info....
                                Operating systems for multicore chips will need more information about their own performance — and more resources for addressing whatever problems arise.
Larry Hardesty, MIT News Office.


An operating system, whether Windows, the Apple OS, Linux or any other, is software that mediates between applications, like word processors and Web browsers, and the rudimentary bit operations of the hardware. Like everything else, operating systems will have to be reimagined for a world in which computer chips have hundreds or thousands of cores.

A computer with hundreds or thousands of cores tackling different aspects of a problem and exchanging data offers much more opportunity than an ordinary computer does for something to go badly wrong. At the same time, it has more resources to throw at any problems that do arise. So, says Anant Agarwal, who leads the Angstrom project, a multicore operating system needs both to be more self-aware — to have better information about the computer’s performance as a whole — and to have more control of the operations executed by the hardware.

But crucial to the Angstrom operating system — dubbed FOS, for factored operating system — is a software-based performance measure, which Agarwal calls “heartbeats.” Programmers writing applications to run on FOS will have the option of setting performance goals: a video player, for instance, may specify that the playback rate needs to be the industry-standard 30 frames per second. Software will automatically interpret that requirement and emit a simple signal — a heartbeat — each time a frame displays. If the heartbeat rate falls below 30, FOS can allocate more cores to the video player. Alternatively, if system resources are in short supply, it can adopt some computational shortcuts in order to get the heartbeat back up again.
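As a rough sketch of how such a heartbeat control loop might behave (the function names and thresholds here are my own invention, not FOS's actual mechanism):

```python
TARGET_FPS = 30  # performance goal set by the application, as in the article

def adjust(allocated_cores, measured_fps, max_cores=16):
    """Crude control loop: add a core when the heartbeat rate is too low,
    reclaim one when there is comfortable headroom."""
    if measured_fps < TARGET_FPS and allocated_cores < max_cores:
        return allocated_cores + 1      # falling behind: give the app another core
    if measured_fps > TARGET_FPS * 1.2 and allocated_cores > 1:
        return allocated_cores - 1      # well ahead: reclaim a core for other work
    return allocated_cores

cores = 4
for fps in [24, 27, 31, 40, 38]:        # simulated heartbeat readings
    cores = adjust(cores, fps)
print(cores)                            # settles back to 4 after the spike
```

The real system would also have the second option described above: instead of adding cores, it can cheapen the computation itself, which is where loop perforation (below) comes in.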


Computer-science professor Martin Rinard’s group has been investigating cases where accuracy can be traded for speed and has developed a technique it calls “loop perforation.” A loop is an operation that’s repeated on successive pieces of data — like, say, pixels in a frame of video — and to perforate a loop is simply to skip some iterations of the operation. Graduate student Hank Hoffmann has been working with Agarwal to give FOS the ability to perforate loops on the fly.
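A toy illustration of the idea (my own sketch, not the group's actual code): perforating a pixel-averaging loop by touching only every fourth pixel trades a little accuracy for a roughly 4x reduction in work.

```python
def mean_brightness(pixels, perforation=1):
    """Average over every `perforation`-th pixel; perforation=1 is exact,
    larger values skip iterations, trading accuracy for speed."""
    sampled = pixels[::perforation]
    return sum(sampled) / len(sampled)

pixels = list(range(256)) * 4   # stand-in for one frame's pixel values

exact = mean_brightness(pixels)                   # visits all 1024 pixels
approx = mean_brightness(pixels, perforation=4)   # visits only 256 of them
print(exact, approx)                              # 127.5 vs 126.0
```

The approximate answer is off by about 1%, which for many media workloads is imperceptible, and a runtime like FOS could dial the perforation factor up or down as load changes.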

Since Angstrom has the luxury of building a chip from the ground up, it’s also going to draw on work that Kaashoek has done with assistant professor Nickolai Zeldovich to secure operating systems from outside attack. An operating system must be granted some access to primitive chip-level instructions — like Kaashoek’s cache swap command and cache address request. But Kaashoek and Zeldovich have been working to minimize the number of operating-system subroutines that require that privileged access. The fewer routes there are to the chip’s most basic controls, the harder they are for attackers to exploit.



Computer-science professor Srini Devadas has done work on electrical data connections that Angstrom is adopting, but outside Angstrom, he’s working on his own approach to multicore operating systems that in some sense inverts Kaashoek’s primitive cache-swap procedure. Instead of moving data to the cores that require it, Devadas’ system assigns computations to the cores with the required data in their caches. Sending a core its assignment actually consumes four times as much bandwidth as swapping the contents of caches does, so it also consumes more energy. But in multicore chips, multiple cores will frequently have cached copies of the same data. If one core modifies its copy, all the other copies have to be updated, too, which eats up both energy and time. By reducing the need for cache updates, Devadas says, a multicore system that uses his approach could outperform one that uses the traditional approach. And the disparity could grow if chips with more and more cores end up caching more copies of the same data.
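A toy sketch of that scheduling idea (entirely my illustration, not Devadas' actual system): keep a directory recording which core's cache holds which data, and dispatch each computation to that core instead of shipping the data around.

```python
# Hypothetical directory mapping data items to the core caching them.
cache_directory = {
    "frame_0": 0,
    "frame_1": 2,
    "frame_2": 1,
}

def assign(task_data, default_core=0):
    """Return the core a task should run on: the one already caching
    its input data, so no cache contents need to move or be updated."""
    return cache_directory.get(task_data, default_core)

print(assign("frame_1"))  # runs on core 2, where frame_1 is cached
```

Because each data item lives in exactly one cache here, no core ever holds a stale copy, which is the coherence-traffic saving the paragraph describes.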

Keep Innovating......














Friday, February 25, 2011

Intel introduces Thunderbolt: A USB Killer



What is Thunderbolt?

It is an I/O port technology, just like USB or FireWire, developed by Intel under the code name Light Peak. It lets you move data to and from peripherals up to 20 times faster than with USB 2.0, more than 12 times faster than with FireWire 800, and twice as fast as USB 3.0. On top of that, with two independent channels, a full 10 Gbps transfer rate is available for both input and output !! You can also run multiple devices from the same port !!



Thunderbolt I/O technology provides native support for Mini DisplayPort displays. It also supports DisplayPort, DVI, HDMI and VGA displays through existing adapters, so you can connect your high-definition display devices to your laptop over Thunderbolt. It offers low latency and highly accurate time-synchronization features.
With Thunderbolt-enabled products, video editing and sharing using Intel Quick Sync Video technology is even faster and easier.
Backup and sharing will be really fast: the Thunderbolt site claims that you can transfer a full-length HD movie in less than 30 seconds and back up a year of continuous MP3 playback in just over 10 minutes !!
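Those claims check out on the back of an envelope at the 10 Gbps line rate of one channel (the file sizes below are my rough assumptions, not figures from the Thunderbolt site):

```python
GBPS = 10  # one Thunderbolt channel, in gigabits per second

movie_gb = 20                                # assumed size of an HD movie, in gigabytes
movie_secs = movie_gb * 8 / GBPS             # gigabytes -> gigabits, then / Gbps

year_mp3_bits = 365 * 24 * 3600 * 128_000    # one year of 128 kbps audio
mp3_mins = year_mp3_bits / (GBPS * 1e9) / 60

print(movie_secs)            # 16.0 seconds, comfortably under the claimed 30
print(round(mp3_mins, 1))    # about 6.7 minutes at the raw line rate
```

Real transfers add protocol and disk overhead, which is presumably why the official claim of "just over 10 minutes" is looser than the raw-line-rate figure.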

Features
  • Dual-channel 10 Gbps per port.
  • Support for multiple connections from the same port. (with no loss in performance !!)
  • Support for PCI-e and DisplayPort protocols.
  • Support for chained devices.
  • Uses native protocol software drivers. (So it's ready to use on any OS!!)
  • Power line for devices to be powered.
  • Small port. (fits the thinnest of laptops)
  • Electrical or optical cables supported.

Thunderbolt was introduced first on the Mac: all new MacBook Pros come with the Thunderbolt connector, which combines the existing Mini DisplayPort connector with the ability to attach data devices such as hard drives and video hardware. Several devices can be connected to a single port. This is the first use of Intel’s “Light Peak” technology, renamed Thunderbolt for Apple. For now it can be used as a Mini DisplayPort connector with existing displays and converter cables to VGA, DVI and HDMI displays. But in the future it will connect many other types of devices at speeds much greater than USB 2.0, FireWire or even USB 3.0.



Performance

Compared to USB 2.0, FireWire 800 and USB 3.0, Thunderbolt is much faster and more efficient...














Sunday, February 20, 2011

Server Farm


What is a Server?

A server is a physical computer dedicated to running one or more services that serve the needs of programs running on other computers on the same network.
A server computer is a computer, or series of computers, that links other computers or electronic devices together. Servers often provide essential services across a network, either to private users inside a large organization or to public users via the internet. For example, when you enter a query in a search engine, the query is sent from your computer over the internet to the servers that store all the relevant web pages, and the results are sent back by those servers to your computer. Many servers have dedicated functionality, such as web servers, print servers and database servers. Enterprise servers are servers used in a business context.



Proxy Server

A proxy server is a system or program that processes clients' requests by forwarding their queries to other systems or servers. There are many types of proxy server; one of them is the anonymous proxy server, or anonymizer. This type of server can make Internet activity untraceable: it accesses the Internet on a user’s behalf without revealing the user’s information, shielding the user by not exposing the computer’s details.
Proxy servers serve many purposes, e.g. lowering risk. They can also be used to get around restrictions on free expression. With an anonymizing server, users need not worry about losing their privacy: these servers help prevent false identity claims and keep browsing histories from being exposed to the public. Despite the protection they offer, anonymous servers are not perfectly safe.
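Conceptually, the anonymizing step boils down to forwarding the client's request with the client-identifying parts stripped out. A minimal sketch (the header set here is illustrative, not an exhaustive list of what real anonymizers remove):

```python
# Headers that can identify or fingerprint the client behind the proxy.
IDENTIFYING_HEADERS = {"X-Forwarded-For", "Via", "User-Agent", "Cookie", "Referer"}

def anonymize(request_headers):
    """Return a copy of the request headers with identifying fields removed,
    as an anonymizing proxy might do before forwarding the request."""
    return {k: v for k, v in request_headers.items()
            if k not in IDENTIFYING_HEADERS}

client_request = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0",
    "Cookie": "session=abc123",
    "Accept": "text/html",
}
forwarded = anonymize(client_request)
print(forwarded)  # only Host and Accept survive; the origin server
                  # sees the proxy's address, not the client's
```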




Server Farm

Servers come in all sizes and shapes. A few years ago, student competitions produced matchbox-size Web servers, and a home Internet gateway the size of a hardback book might contain NAT, DHCP and Web servers along with a router. Companies also offer smaller personal servers, for those who want to carry their data and programs with them.
Many Web sites are served by a single PC; however, high transaction volumes might require several Web servers, along with database, application and other types of server. This diagram shows a server farm with four Web servers:


Server farms are commonly used for cluster and cloud computing. Many modern supercomputers comprise giant server farms of high-speed processors connected by either Gigabit Ethernet or custom interconnects such as InfiniBand or Myrinet.
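A toy sketch of how a front end might spread incoming requests across a four-server farm like the one above, using simple round-robin dispatch (the hostnames are invented):

```python
from itertools import cycle

# The four web servers in the farm; a real dispatcher would also
# track server health and load instead of rotating blindly.
servers = ["web1", "web2", "web3", "web4"]
next_server = cycle(servers)

def dispatch(request):
    """Hand each incoming request to the next server in rotation.
    The request itself is ignored in this sketch."""
    return next(next_server)

assigned = [dispatch(f"GET /page/{i}") for i in range(6)]
print(assigned)  # ['web1', 'web2', 'web3', 'web4', 'web1', 'web2']
```

Production farms typically put this logic in a dedicated load balancer, which also removes failed servers from the rotation.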


Google Server Farm




Apple Server Farm

A server farm consists of several server rooms... These are various pictures of Facebook's server room.....