Thursday, February 26, 2015

The Cisco Unified Computing System (UCS): A Future-Proof Investment

A Future-Proof Investment

The Cisco Unified Computing System gives data centers room to scale while anticipating future technology developments, helping increase return on investment today while protecting that investment over time. The blade server chassis, power supplies, and midplane are capable of handling future servers with even greater processing capacity; future, higher-power CPUs; and future 40 Gigabit Ethernet standards that are expected to bring a total of 80 Gbps of bandwidth to each half-width blade server.

System Overview

From a high-level perspective, the Cisco Unified Computing System consists of one or two Cisco UCS 6100 Series Fabric Interconnects and one or more Cisco UCS 5100 Series Blade Server Chassis populated with Cisco UCS B-Series Blade Servers. Cisco UCS Manager is embedded in the fabric interconnects, and it supports all server chassis as a single, redundant management domain.

Each chassis requires at least one 10 Gigabit unified fabric connection to a Cisco UCS 6100 Series Fabric Interconnect. A maximum configuration would occupy all 40 fixed ports of a redundant pair of Cisco UCS 6140XP Fabric Interconnects with 40 blade server chassis and a total of up to 320 blade servers. A typical configuration would have 2 to 4 unified fabric connections from each chassis to each of an active-active pair of switches.
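
As a rough back-of-the-envelope check on those numbers, the short Python sketch below works out the blade count and per-chassis fabric bandwidth for a given build. The chassis, blade, and port counts come straight from this section; the function name and the 10-chassis "typical" example are purely illustrative assumptions.

```python
# Back-of-the-envelope sizing for a Cisco UCS management domain, using the
# figures quoted in this section: 40 fixed ports per UCS 6140XP, 8 blades per
# UCS 5100 Series chassis, and 10 Gbps per unified fabric connection.

FIXED_PORTS_PER_6140XP = 40   # fixed ports on one UCS 6140XP Fabric Interconnect
BLADES_PER_CHASSIS = 8        # half-width blade slots per 5100 Series chassis
LINK_GBPS = 10                # one 10 Gigabit unified fabric connection


def domain_size(chassis_count, links_per_chassis_per_fi):
    """Return (total blades, fabric Gbps per chassis, fixed ports used per FI)."""
    blades = chassis_count * BLADES_PER_CHASSIS
    # With an active-active pair of fabric interconnects, each chassis lands
    # links_per_chassis_per_fi connections on each interconnect.
    gbps_per_chassis = 2 * links_per_chassis_per_fi * LINK_GBPS
    ports_used_per_fi = chassis_count * links_per_chassis_per_fi
    assert ports_used_per_fi <= FIXED_PORTS_PER_6140XP, "exceeds fixed port count"
    return blades, gbps_per_chassis, ports_used_per_fi


# Maximum configuration from the text: 40 chassis, one link per chassis per FI.
print(domain_size(40, 1))   # (320, 20, 40) -> 320 blades, all 40 fixed ports used
# A hypothetical "typical" build: 10 chassis with 4 links to each interconnect.
print(domain_size(10, 4))   # (80, 80, 40) -> 80 Gbps of unified fabric per chassis
```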

For example, Figure 2 illustrates 36 blade server chassis connected to an active-active pair of fabric interconnects that support failover. Uplinks from the two fabric interconnects deliver LAN traffic to the LAN aggregation or core layer and SAN traffic through native Fibre Channel to either SAN A or SAN B.

Figure 2. Example Cisco Unified Computing System with 36 Cisco UCS 5100 Series Blade Server Chassis and Two Cisco UCS 6140XP Fabric Interconnects

Figure 3 shows the components that make up the Cisco Unified Computing System:

  • The unified fabric is supported by Cisco UCS 6100 Series Fabric Interconnects. The figure shows a Cisco UCS 6120XP Fabric Interconnect with 20 fixed ports and one expansion module slot.

  • Cisco UCS Manager runs within the two Cisco UCS 6100 Series Fabric Interconnects and manages the system as a single, unified management domain. The management software is deployed in a clustered, active-passive configuration so that the management plane remains intact even if an interconnect fails.

  • The unified fabric is extended to each of up to 40 blade chassis through up to two Cisco UCS 2100 Series Fabric Extenders per blade chassis, each supporting up to four unified fabric connections. Each chassis must have at least one connection to a parent Cisco UCS 6100 Series Fabric Interconnect. (A short sketch after this list works through the per-chassis bandwidth and memory figures.)

Figure 3. The Cisco Unified Computing System Is Composed of Interconnects, Fabric Extenders, Blade Server Chassis, Blade Servers, CNAs, and Cisco Extended Memory Technology

  • Up to eight Cisco UCS B-Series Blade Servers can be installed in a Cisco UCS 5100 Series Blade Server Chassis. The chassis supports half-width and full-width blades. Cisco UCS B-Series Blade Servers use Intel Xeon 5500 Series processors that deliver intelligent performance, automated energy efficiency, and flexible virtualization.

  • Transparent access to the unified fabric is provided by one of three types of network adapters, each in a mezzanine card form factor optimized for a different purpose: a virtual interface card that incorporates Cisco VN-Link technology and up to 128 dynamically configured virtual interface devices; converged network adapters (CNAs) that provide a fixed number of Ethernet and Fibre Channel over Ethernet (FCoE) connections and are compatible with existing Fibre Channel driver stacks; and a network interface card designed to deliver efficient, high-performance 10 Gigabit Ethernet.

  • Cisco Extended Memory Technology in the Cisco UCS B250 M1 Extended Memory Blade Server expands the memory footprint available to two-socket x86 servers. The extended memory blade server can support up to 384 GB of DDR3 memory with up to 48 industry-standard DIMMs.

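As noted in the list above, here is a minimal sketch of the arithmetic behind two of those figures: the unified fabric bandwidth a fully cabled chassis receives through its fabric extenders, and the DIMM size implied by the extended memory blade's 384 GB maximum. Only numbers quoted in this section are used; the rest is illustrative.

```python
# Arithmetic behind two of the chassis figures above: the unified fabric
# bandwidth a fully cabled chassis receives through its fabric extenders, and
# the DIMM size implied by the extended memory blade's 384 GB maximum.

FEX_PER_CHASSIS = 2    # up to two UCS 2100 Series Fabric Extenders per chassis
LINKS_PER_FEX = 4      # each extender supports up to four unified fabric links
LINK_GBPS = 10         # 10 Gigabit unified fabric

max_fabric_per_chassis = FEX_PER_CHASSIS * LINKS_PER_FEX * LINK_GBPS
print(f"Maximum unified fabric per chassis: {max_fabric_per_chassis} Gbps")  # 80 Gbps

TOTAL_MEMORY_GB = 384  # UCS B250 M1 Extended Memory Blade Server maximum
DIMM_SLOTS = 48        # industry-standard DDR3 DIMMs
print(f"Implied DIMM size: {TOTAL_MEMORY_GB // DIMM_SLOTS} GB")              # 8 GB DIMMs
```
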
Facebook's 6-pack: the first open modular switch platform



Network-switch vendors will go broke if they are relying on Facebook for sales. Facebook now makes all its own network gear under the auspices of the Open Compute Project (OCP). From the OCP website: "A set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software -- together with trusted project validation and testing -- in a truly open and collaborative community environment."

Most who follow data-center networking figured it was a matter of time before Facebook would design and build all the network equipment needed in its data centers. The first piece of equipment to be redesigned was the Top Of Rack (TOR) switch. The new TOR switch was code-named Wedge -- of note is the powerful server Facebook design engineers added to Wedge's internal hardware. Facebook software developers dutifully created an operating system code-named FBOSS to run Wedge. It wasn't much of a stretch to realize Facebook had other things in mind for the Wedge hardware design and FBOSS.

The data-center network architecture succumbed next. Facebook introduced Fabric a few months after Wedge. Yuval Bachar, a hardware networking engineer at Facebook, explained the importance of Fabric and Wedge: "For both projects, we broke apart the hardware and software layers of the stack and opened up greater visibility, automation, and control in the operation of our network."

The last piece of the puzzle

As tech pundits expected, Facebook's data-center network overhaul was not yet finished. "Even with all that progress, we still had one more step to take," said Bachar. "We had a TOR, a fabric, and the software to make it run, but we still lacked a scalable solution for all the modular switches in our fabric. So we built the first open modular switch platform."

The platform Bachar referred to in his Feb. 11, 2015 press release is known as 6-pack (Figure A). Facebook describes 6-pack as a full-mesh, dual-backplane, non-blocking, two-stage switch with 12 independent switching elements. Each element can move an impressive 1.28 Tbps. Bachar added, "We have two configurations: One configuration exposes 16x40GE ports to the front and 640G (16x40GE) to the back, and the other is used for aggregation and exposes all 1.28T to the back."
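
Those per-element numbers are easy to reconcile. The short sketch below, illustrative only and using just the port counts Bachar gives, shows why 16 x 40GE toward the front plus another 16 x 40GE toward the back works out to 1.28 Tbps per switching element.

```python
# Reconciling the per-element figures quoted above: 16 x 40GE toward the front
# plus another 640G (16 x 40GE) toward the backplane gives the 1.28 Tbps
# switching capacity of a single 6-pack element.

PORTS_PER_DIRECTION = 16
PORT_SPEED_GBPS = 40

front_gbps = PORTS_PER_DIRECTION * PORT_SPEED_GBPS   # 640 Gbps to the front panel
back_gbps = PORTS_PER_DIRECTION * PORT_SPEED_GBPS    # 640 Gbps to the backplane
print(front_gbps + back_gbps)                        # 1280 Gbps, i.e. 1.28 Tbps per element
```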

Components in a 6-pack

The modular design starts with "the line card," a component nearly identical to Wedge. "Each element runs its own operating system on the local server and is completely independent, from the switching aspects to the low-level board control and cooling system," continued Bachar. "This means we can modify any part of the system with no system-level impact, software or hardware."

The second 6-pack hardware module is called "the fabric card"; it contains two line-card boards with their business ends facing the back of the 6-pack hardware platform. The fabric card's configuration allows a full mesh locally, meaning non-blocking connectivity is provided within the 6-pack switch. The fabric card also aggregates the out-of-band management network traffic that is reachable via the external ports.

The two components come together in the 6-pack platform shown in Figure A. There are eight line cards and two fabric cards. The schematic below (Figure D) offers an idea of how the network modules interconnect. (In Figure D, BP is backplane, Fabric is fabric card, and LC is line card.)

Figure D

 Image courtesy of Facebook

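The element count adds up the same way, assuming, as described above, that each line card contributes one Wedge-like switching element and each fabric card carries two line-card boards. The aggregate figure at the end is simply the sum of the per-element capacities, not a throughput claim from Facebook.

```python
# Element count for one 6-pack chassis, using only the figures quoted in the
# article: eight Wedge-like line cards (one switching element each) plus two
# fabric cards that each carry two line-card boards.

LINE_CARDS = 8
FABRIC_CARDS = 2
ELEMENTS_PER_LINE_CARD = 1
ELEMENTS_PER_FABRIC_CARD = 2
ELEMENT_TBPS = 1.28

elements = (LINE_CARDS * ELEMENTS_PER_LINE_CARD
            + FABRIC_CARDS * ELEMENTS_PER_FABRIC_CARD)
print(elements)                            # 12 independent switching elements
# Summing per-element capacities (not a system throughput claim by Facebook):
print(round(elements * ELEMENT_TBPS, 2))   # 15.36 Tbps of raw switching capacity
```
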
Bachar concluded his press release by mentioning that 6-pack, Wedge, and FBOSS are in production testing. And as promised, Facebook intends to contribute the 6-pack modular network switch design to the OCP.

Open Compute Project - Facebook



What is OCP?

The Open Compute Project initiative was announced in April 2011 by Facebook to openly share designs of data center products.[1] The effort came out of a redesign of Facebook's data center in Prineville, Oregon.[2] After two years, it was admitted that "the new design is still a long way from live data centers."[3] However, some aspects published were used in the Prineville center to improve the energy efficiency, as measured by the power usage effectiveness index defined by The Green Grid.[4]

Components of the Open Compute Project include:

  • Open Vault storage building blocks offer high disk densities, with 30 drives in a 2U Open Rack chassis designed for easy disk drive replacement. The 3.5 inch disks are stored in two drawers, five across and three deep in each drawer, with connections via serial attached SCSI.[6] Another design concept was contributed in 2012 by Hyve Solutions, a division of Synnex.[7][8]
  • Mechanical mounting system: Open racks have the same outside width (600 mm) and depth as standard 19-inch racks, but are designed to mount wider chassis with a 537 mm width (about 21 inches). This allows more equipment to fit in the same volume and improves air flow. Compute chassis sizes are defined in multiples of an OpenU, which is 48 mm, slightly larger than the typical rack unit.
  • Data center designs for energy efficiency include 277 VAC power distribution, which eliminates one transformer stage found in typical data centers, and a single-voltage (12.5 VDC) power supply designed to work with 277 VAC input and 48 VDC battery backup.[4] (A quick sketch of the arithmetic behind these figures follows this list.)
  • On May 8, 2013, an effort to define an open network switch was announced.[9] The plan was to allow Facebook to load its own operating system software onto the switch. Press reports predicted that more expensive, higher-performance switches would continue to be popular, while less expensive, more commodity-like products (the so-called "top-of-rack" switches) might adopt the proposal.[10]
    A similar project for a custom switch for the Google platform had been rumored, and evolved to use the OpenFlow protocol.[11][12]
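
As promised in the list above, here is a quick sketch of the arithmetic behind two of those design choices: where 277 VAC comes from (assuming a common 480 V three-phase feed, which is an assumption about the upstream supply rather than something stated above) and how the 48 mm OpenU compares with the standard 44.45 mm rack unit.

```python
# Two quick checks on the figures above, assuming a common North American
# 480 V three-phase feed (an assumption, not stated in the text) and the
# standard 1.75-inch (44.45 mm) rack unit.
import math

THREE_PHASE_VOLTS = 480
phase_to_neutral = THREE_PHASE_VOLTS / math.sqrt(3)
print(round(phase_to_neutral, 1))  # ~277.1 V: servers can take 277 VAC directly,
                                   # skipping a step-down transformer stage

STANDARD_U_MM = 44.45              # 1.75 inches
OPEN_U_MM = 48.0                   # OpenU height used by Open Rack
print(f"OpenU is {OPEN_U_MM / STANDARD_U_MM:.1%} of a standard rack unit")  # ~108.0%
```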

We are extremely excited about the upcoming Open Compute U.S. Summit 2015 on March 10-11, and apparently so is our community! Due to overwhelming response we are limiting the number of attendees and will not be accepting walk-ins or onsite registrations.

If you haven't already done so, please register ASAP: https://www.eventbrite.com/e/open-compute-us-summit-2015-march-10-11-details-below-registration-12528804993

If you have already registered for this event and can no longer attend, please cancel your ticket so that others have a chance to register. We have removed all duplicate registrations. If you are unsure whether you are registered, please email us at events@opencompute.org.

As always, you are welcome to watch via Livestream; details will be available on our website prior to the event.



Wednesday, February 25, 2015

Deutsche Bank signs 10-year deal to re-engineer wholesale banking IT



Deutsche Bank has signed a 10-year deal with HP to re-engineer the IT that underpins its wholesale banking arm in preparation for the next phase of its digital transformation.

The multi-billion-euro deal will see the German banking giant use HP's Helion cloud platform to modernise the IT that supports the bank's applications.

It forms the next part of Deutsche Bank's digital transformation. HP will provide datacentre services on-demand, including storage platforms as a service and hosting. The deal with HP will largely replace work previously carried out by in-house teams, with a small number of bank staff moving to the supplier.

The bank wants to re-engineer its underlying technology platform globally and standardise its IT foundations to support modern technologies such as automation. Once this is achieved, the infrastructure, which will harness mid-range systems, will support the introduction of digital services in the back office and for customers.

Deutsche Bank will retain control of IT architecture, application development and IT security.

Henry Ritchotte, COO at Deutsche Bank, said the agreement will enable the bank to standardise IT and reduce costs. 

"Having a more modern and agile technology platform will further improve the bank's ability to launch new products and services and lay the foundation for the next phase of its digital strategy," he said.

As part of the deal, the bank will use a customised version of HP's enterprise cloud platform, Helion, according to HP CEO Meg Whitman.

Deutsche Bank is investing in future technologies. It recently appointed its first chief data officer as part of its plan to introduce digital practices. JP Rangaswami joined from software-as-a-service giant Salesforce.com, where he had been chief scientist since 2010. Prior to that, Rangaswami had a five-year spell at BT and before that was CIO at investment bank Dresdner Kleinwort Wasserstein.

The bank is aware of threats to its business from companies such as Apple and PayPal in the payments market. Banks in the UK increasingly consider companies such as Google, Apple and Facebook as their biggest competitive threat. This trend is seeing banks look for partnerships in the IT industry, including joint ventures and investments in startups.

Deutsche Bank set up a joint innovation venture with IBM, Microsoft and Indian IT services firm HCL Technologies last year to improve its digital credentials.


Microsoft Virtual Academy Hybrid Cloud Courses



Extend your datacenter to the cloud to give organizations simpler management and greater flexibility. Hybrid cloud is no longer just a "valid alternative" – it is now the differentiating factor for businesses that want to be competitive. Get started with in-depth technical resources that let IT pros explore, learn, and try the technology, and dive deep into networking, storage, and disaster recovery scenarios.

Courses (sorted by most recent): the catalog lists three hybrid cloud courses, at levels 200, 200, and 300.