Open Source And Security Services

Mellanox Announces World’s Most Scalable Switch Platforms Based on HDR 200G InfiniBand Quantum Switch Technology

With Up to 1,600 Ports in a Single Platform, Quantum Switch Systems Enable the Highest Performance While Reducing Data Center Network Expenses by 4X
SUPERCOMPUTING 2017 – Nov. 9, 2017 –– Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the world’s most scalable switch platform family, based on HDR 200G InfiniBand Quantum™ switch technology. The new family includes:

Quantum QM8700 – Top-Of-Rack 40-port 200Gb/s or 80-port 100Gb/s switch platform,
Quantum CS8510 – modular 200-port 200Gb/s or 400-port 100Gb/s switch platform, and
Quantum CS8500 – modular 800-port 200Gb/s or 1600-port 100Gb/s switch platform.

The higher switch density of the new platforms will enable Mellanox customers and users to optimize their use of space and power, reducing data center expenses by 4X or more while increasing performance by 2X. For departmental-scale deployments, a single Quantum QM8700 switch connects 80 servers, 1.7 times more than competitive products. For enterprise scale, a 2-layer Quantum switch topology connects 3,200 servers, 2.8 times more than competitive products. For hyperscale, a 3-layer Quantum switch topology connects 128,000 servers, 4.6 times more than competitive products.
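
These port counts are consistent with a standard non-blocking fat-tree calculation for an 80-port (HDR100) switch radix. The short sketch below is our own illustration of that arithmetic, not part of the Mellanox announcement:

    # Illustrative fat-tree capacity arithmetic (our own sketch, not Mellanox material).
    # For a non-blocking fat tree built from switches with `radix` ports, the maximum
    # number of attached hosts is `radix` for a single switch and 2 * (radix / 2)**L
    # for an L-tier topology.
    def max_hosts(radix: int, levels: int) -> int:
        if levels == 1:
            return radix
        return 2 * (radix // 2) ** levels

    for levels in (1, 2, 3):
        print(levels, max_hosts(80, levels))   # -> 80, 3200, 128000

These values match the 80-, 3,200- and 128,000-server figures quoted above.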

These data center scaling advantages enable high-performance computing, deep learning, cloud, storage and other infrastructures to reduce their network equipment cost by 4X, their electricity expense by 2X, and improve their data transfer time by 2X.

“The HDR 200G Quantum switch platforms will enable the highest scalability, all the while dramatically reducing data center capital and operational expenses,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Quantum will enable the next generation of high-performance computing, deep learning, Big Data, cloud and storage platforms to deliver the highest performance while setting a clear path to Exascale computing.”

“Bright Computing has worked with Mellanox for many years, and our engineering collaboration enables quick and efficient HDR and HDR100 deployments with our Deep Learning offerings from our leading OEM partners,” said Martijn de Vries, CTO, Bright Computing. “We’re also extending our collaboration into the private cloud space, working with Mellanox to deliver InfiniBand-class network performance with Bright OpenStack.”

“In speaking with customers around the globe, we continue to hear of the challenges they face in applying the power of high performance computing technology to new and emerging areas including data analytics and deep learning,” said Kash Shaikh, vice president of product management and marketing, Hybrid Cloud & Ready Solutions Group, Dell EMC. “Dell EMC and Mellanox have a long-standing relationship and we will continue to work together, including future support of HDR, to help our customers deploy the right solutions to help them navigate these emerging technologies.”

“Bandwidth and latency are two key variables to application performance in a High Performance Computing data center,” said Bill Mannel, vice president and general manager of HPC and AI at HPE. “Similar to our solutions with EDR, HPE and Mellanox engineers are developing embedded HDR fabric options for purpose-built HPC platforms, HPE Apollo 6000 Gen10 and HPE SGI 8600 systems, providing our customers more bandwidth and reduced latency using the new Mellanox Quantum powered HDR switch technology.”

“The explosive growth in data and customer requirements on data analysis in real time demands faster network speeds as well as more scalable computing and storage infrastructure,” said Qiu Long, president, IT Server Product Line, Huawei. “We are pleased to work with Mellanox to offer the leading HPC solutions with Mellanox HDR InfiniBand innovation. Huawei continuously commits to deliver higher performance while reducing total cost of ownership for our customers.”

“Mellanox’s new HDR 200G Quantum switch platforms hold the promise of giving us new levels of scalability while helping to reduce data center costs,” said Mr. Leijun Hu, VP of Inspur Group. “We are pleased to see the introduction of Quantum which we believe will enable the next generation of high-performance applications.”

“Mellanox HDR and HDR100 InfiniBand technology empowers innovation at scale,” said Mr. Li Li, vice president, general manager of Product Sales and Marketing at New H3C Group. “Incorporating Mellanox HDR and HDR100 InfiniBand technology into our solution will enable us to achieve optimal configuration and have better control of both capital and operational expenditures.”

“As AI becomes more pervasive, massive amounts of real-time data piped to and from larger scale GPU accelerated systems is paramount,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “The HDR InfiniBand Quantum switch technology from Mellanox provides the bandwidth, scalability and flexibility needed to deliver new levels of performance and efficiency for the next generation of deep learning accelerated computing systems.”

“Mellanox Quantum more than doubles the number of compute nodes per InfiniBand leaf switch, which supports the industry-leading physical density of the Penguin Tundra ES platform,” said Jussi Kukkonen, vice president, Advanced Solutions, Penguin Computing, Inc. “In early 2018, Penguin will bring to market the first systems featuring a true PCI-Express generation 4 I/O subsystem, unlocking the full 200Gbps performance potential of the Mellanox Quantum and InfiniBand HDR.”

“With Mellanox’s new HDR 200G Quantum switch platforms, we now see a clear path to Exascale,” said Mr. Chaoqun Sha, senior vice president of technology at Sugon. “We are excited to see this solution come to market with its data center scaling advantages and improved data transfer rates.”

ABOUT THE KRACK WPA2 VULNERABILITY

WPA2 is currently the protocol that provides the strongest security for WiFi networks, and information about certain weaknesses in this protocol was recently made public.

These weaknesses have been named KRACK (Key Reinstallation Attacks). In essence, the attack replays message 3 of the WPA2 4-way handshake so that the victim reinstalls a key that is already in use and starts reusing nonces.

The scope of these weaknesses depends on the configuration of the WiFi network itself and on each vendor's implementation. Practically every WiFi device that correctly implements the protocol is affected.

CONSEQUENCES OF KRACK

This vulnerability can allow an attacker to:

  • Decrypt WiFi network traffic (allowing TCP connection hijacking, capture of sensitive information when the transport protocols are unencrypted, for example HTTP instead of HTTPS, etc.)
  • Replay broadcast/multicast frames
  • Inject traffic into the WiFi network (TKIP or GCMP only)
  • Force the use of a predictable encryption key (Android 6.0+ and Linux only)

THE GOOD

  • Vendors should take responsibility for the problem and fix it through the appropriate updates

THE BAD

  • Many devices will not have a simple way to apply updates
  • Given the wide variety of devices, finding an update for every device may not be easy
  • The attack affects both access points and clients, so it is important to update both sides; updating only one side does not prevent the problem

THE WORST

  • Attacks against Android 6.0+ platforms are very easy to carry out
  • IoT devices may never receive an update

RECOMMENDATIONS

  • Apply firmware updates that fix the problem on access points
  • Use only WPA2/AES-CCMP as the protocol/cipher, which minimizes the impact of the problem (see the sample configuration after this list)
  • Isolate WiFi networks and avoid cleartext protocols on them, at least for access to internal company resources; in general, users working over WiFi should do so through a VPN
  • Apply operating system updates (especially important on Linux and Android 6.0+ platforms)
  • On Android 6.0+ devices, disable WiFi until a patch that fixes the problem is available
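
As an illustration of the WPA2/AES-CCMP recommendation above, a minimal hostapd configuration fragment that restricts an access point to WPA2 with CCMP only (no WPA1, no TKIP) might look like the following. It is a sketch to adapt to each deployment, not a complete configuration:

    # hostapd.conf fragment (illustrative): WPA2 with AES-CCMP only
    wpa=2                  # enable RSN/WPA2 only, disable WPA1
    wpa_key_mgmt=WPA-PSK   # pre-shared key authentication
    rsn_pairwise=CCMP      # offer only the AES-CCMP pairwise cipher, never TKIP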

WHAT CANNOT BE DONE

  • Obtain the WPA key
  • Inject packets (if AES-CCMP is used)

Source: https://github.com/kristate/krackinfo

For more information, you can contact us at info@open3s.com.

Hybrid architecture with Nutanix and Google Cloud + Kubernetes

Google Cloud recently announced a strategic partnership with Nutanix to help eliminate the problems in hybrid cloud deployments for enterprises:

https://www.blog.google/topics/google-cloud/nutanix-and-google-cloud-team-simplify-hybrid-cloud/

Hybrid clouds, understood as the combination of an on-premise deployment and a public cloud (such as Google Cloud), allow organizations to run applications in both environments and take advantage of the benefits of each:

  • Increase the speed at which new products and features are put into production
  • Increase the resources available to serve customer demand
  • Move applications to the public cloud at the pace they need
  • Reduce the time spent on infrastructure and increase the time spent writing code
  • Reduce costs by improving resource utilization and compute efficiency

read more…

UDS Enterprise strengthens its alliance with Nutanix

 

The Nutanix Alliances Team has recognized VirtualCable's commitment to excellence in the development of a joint VDI solution made up of the Nutanix Acropolis hypervisor and our UDS Enterprise connection broker.

UDS Enterprise has received the Nutanix Ready AHV-Integrated technology validation, which joins the Nutanix Ready for Desktop Virtualization certification awarded two years ago.

The hyperconverged solutions vendor has also created a microsite within its website focused exclusively on the advantages and optimal operation of a virtual desktop platform built with UDS Enterprise and Nutanix Acropolis.

News on AHV Networking

Does deploying applications take too long?

It usually does: instead of hours, it takes weeks. Manual processes and the constant back-and-forth between the application team and the network team tend to stretch the time needed to deploy an application from hours into weeks.

If the network increasingly looks like a black box, or if application projects seem to drag on with no end in sight, the set of networking tools available in Nutanix AHV helps simplify network operations and get projects up and running quickly and securely.

Applications today depend on the network more than ever. Application design best practices recommend separating the most important functions across multiple containers, virtual machines or hosts to provide scalability and easy maintenance, and then each of those pieces has to be connected to the network. Whatever deployment platform is used, the network is part of the application.

When there is a problem in the network, the application is the first place where it shows up. When the application has to grow or a new service has to be added, the physical and virtual network must be configured so that the new pieces can talk to the existing components. And when an attacker gains a foothold in one part of the application, they can use that same network to explore every piece connected to it.

So, to handle all this complexity in deploying and managing application networks, three things are essential:

  • Visualization
  • Automation
  • Security

In this series of posts we analyze and focus on these networking requirements, highlighting the AHV features announced at Nutanix .NEXT that help you get a network up in a single click.

  • Part 1: AHV network visualization (this post)
  • Part 2: AHV network automation and integration
  • Part 3: AHV network microsegmentation
  • Part 4: AHV network service chains

read more…

Why iSER is the right high speed Ethernet all-flash interconnect today

The following is a guest blog post from Subhojit Roy, a Senior Technical Staff Member working out of IBM India Labs.

All-flash storage is bringing change throughout the data center to meet the demands of modern workloads. Fiber Channel has traditionally been the preferred interconnect for all-flash storage. However, 21st century data center paradigms like cloud, analytics, software defined storage, etc. are driving a definitive shift towards Ethernet infrastructure, with Ethernet connectivity for both servers and storage. As Ethernet speeds rapidly increase to 25/40/50/100Gb, it becomes more and more attractive as an interconnect to all-flash storage. While traditional iSCSI has gained significant ground as an Ethernet interconnect to storage, inefficiencies in the TCP/IP stack don’t allow it to be the preferred interconnect to all-flash storage.

In comes iSER (iSCSI Extensions for RDMA), which maps the iSCSI protocol onto RDMA (Remote Direct Memory Access). iSER provides an interconnect that is very capable of rivaling Fiber Channel as the all-flash interconnect of choice. It leaves the administrative framework of iSCSI untouched while mapping the data path over RDMA. As a result, management applications like VMware vCenter, OpenStack, etc. continue to work as is, while the iSCSI data path gets a speed boost from Remote Direct Memory Access. A move from traditional iSCSI to iSER would thus be a painless affair that doesn’t require any new administrative skills.
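
As a rough illustration of how small that administrative change can be on a Linux open-iscsi initiator (our example, not taken from the post), switching to iSER essentially amounts to selecting the iSER transport for the interface used at discovery and login time; the interface name, portal address and target IQN below are placeholders:

    # Create an iface record bound to the iSER transport (names/addresses are placeholders)
    iscsiadm -m iface -I iser0 -o new
    iscsiadm -m iface -I iser0 -o update -n iface.transport_name -v iser
    # Discover and log in to the target through that iface
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10 -I iser0
    iscsiadm -m node -T iqn.2017-01.com.example:target0 -p 192.168.1.10 -I iser0 --login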

iSER retains all the enterprise-class capabilities that are expected of Tier 1 shared storage. It also matches or beats Fiber Channel in terms of access latency, bandwidth and IOPS. Capabilities like multipath IO, SCSI Reservations, Compare and Write, vVols support, and offloaded data copy operations like XCOPY/ODX will work from day one on iSER. In addition, iSER benefits from all the SCSI error recovery techniques that have evolved over the years – things like LUN Reset, Target Reset, Abort Task, etc. In essence, all enterprise-class applications will continue to work as reliably and seamlessly over iSER as they used to work over iSCSI.

The diagram below shows how iSCSI is involved in the iSER IO path only for the Command and Status phases, while the Data Transfer phase is handled entirely by RDMA transfers directly into application buffers, without involving a copy operation. This compares well with NVMeF in terms of latency reduction.

 

NVMe over Fabrics, or NVMeF, is a new protocol that promises to take all-flash interconnect technology to the promised land of extreme performance and parallelism, and expectations for it are high. It is a protocol that is still evolving, and therefore not yet mature enough to meet the requirements of clustered applications running over shared Tier 1 all-flash storage. It is also a quantum jump that expects the user not only to move from Fiber Channel to high speed Ethernet technology, but also to adopt a totally new protocol with a new, unfamiliar administrative model. NVMeF will likely take some time to mature as a protocol before it can be accepted in data centers requiring Tier 1 shared all-flash storage. In addition, applications must adapt to a new queuing model to exploit the parallelism offered by flash storage.

That leaves iSER as the right technology to bridge the gap and step in as the preferred interconnect for shared all-flash storage today. iSER is ready from day one for latency, IOPS and bandwidth hungry applications that want to exploit high speed Ethernet technology, both as a north-south and east-west interconnect. IO parallelism may not be as high as promised by NVMeF, but it’s sufficient for all practical purposes without requiring applications to be rewritten to fit into a new paradigm.

By implementing iSER today, the move from Fiber Channel to high speed Ethernet can be tried out without ripping out the entire administrative framework or the need to rewrite applications. A gradual move from Fiber Channel to RDMA over Ethernet replaces the layer 2 transport protocol and helps assess the newer protocol in terms of its stability, resiliency and error recovery capabilities that are essential for a SAN storage interconnect. Once proven, the same RDMA technology can then be leveraged to bring in NVMeF which promises more in the future. Since iSER and NVMeF will work equally well on the same hardware, the infrastructure investment made in iSER is protected for the long term.

At IBM we are working toward enabling our customers to move to data center infrastructure that consists purely of Ethernet interconnects with speeds scaling rapidly from 10 – 100Gbps. Built over iSER, this capability is all-flash storage ready from day one. Agnostic of the underlying RDMA capable networking, it is likely to be very attractive to software defined storage infrastructure that is expected to be built from commodity hardware. It enables IBM Spectrum Virtualize products (IBM Storwize and IBM SVC) to be deployed on cloud infrastructure where Ethernet is the only available infrastructure. And in order to get there, we have partnered with multiple hardware and software vendors that are at the forefront of the high speed Ethernet revolution.

So get ready to experience all-flash storage connected over high speed Ethernet from IBM sometime in the near future!

Subhojit is a Senior Technical Staff Member working out of IBM India Labs, Pune. He works as a development architect for the IBM Spectrum Virtualize product. He has worked for 23 years in data storage, storage virtualization and storage networking across organizations including IBM, Veritas, Brocade and Symantec. At IBM he has been driving the Ethernet and IP storage architecture and roadmap for the IBM Spectrum Virtualize products. Currently he is working on high speed Ethernet interconnects for all-flash storage, including iSER and NVMeF. Prior to IBM he was responsible for key features of enterprise storage products at his earlier organizations. He is a Master Inventor and a member of the Academy of Technology at IBM, and holds significant intellectual property, with more than 50 granted and filed patent applications. He can be found on Twitter @sroy_sroy and on LinkedIn at https://www.linkedin.com/in/roysubhojit/.
