Nexenta upholds open source software-defined storage platform for containers, cloud native apps

Nexenta launched its comprehensive strategy for bringing enterprise- and cloud-grade storage and data management to container-based, cloud-native application deployments. The open source-driven software-defined storage (OpenSDS) vendor blends container technology with its scale-out NexentaEdge software-defined storage solution, embracing microservice architectures to provide a high-performance, enterprise-grade storage foundation for cloud-native applications.

As part of this initiative, Nexenta is joining the Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF), whose members already include AT&T, Amazon Web Services, Cisco, Docker, EMC, Fujitsu, Google, Hewlett-Packard, IBM, Intel, Joyent, The Linux Foundation, Mesosphere, Microsoft, Nutanix, Oracle, Red Hat, SUSE, Sysdig, Twitter, Verizon and VMware. Nexenta is also merging container technology into NexentaEdge, enabling NexentaEdge nodes to be deployed as microservices on any scale-out cluster of Linux servers and providing high-performance block and object storage services with enterprise-grade functionality (inline deduplication, inline compression, and unlimited snapshots and clones).

NexentaEdge is designed from the ground up to deliver high-performance block and object storage services and limitless scalability to next-generation OpenStack clouds, petabyte-scale active archives and big data applications. NexentaEdge runs on shared-nothing clusters of Linux servers and builds on Nexenta IP and patent-pending Cloud Copy On Write (CCOW) technology to break new ground in reliability, functionality and cost efficiency.
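
The article doesn't detail how CCOW works internally, but the enterprise features it cites (inline deduplication, unlimited snapshots and clones) rest on a well-known pattern: content-addressed, copy-on-write storage. A minimal conceptual sketch of that pattern in Python, not Nexenta code:

```python
import hashlib

class CowStore:
    """Toy content-addressed store illustrating inline dedup and cheap clones."""

    CHUNK = 4096

    def __init__(self):
        self.chunks = {}    # sha256 digest -> chunk bytes, stored once
        self.objects = {}   # object name -> ordered list of chunk digests

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # inline deduplication
            refs.append(digest)
        self.objects[name] = refs

    def clone(self, src, dst):
        # Copy-on-write clone: duplicate only the reference list, not the data.
        self.objects[dst] = list(self.objects[src])

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.objects[name])

store = CowStore()
store.put("vm-image", b"A" * 8192 + b"B" * 4096)   # 3 chunks, 2 unique
store.clone("vm-image", "vm-image-snap")            # no bulk data copied
assert store.get("vm-image-snap") == store.get("vm-image")
print(len(store.chunks), "unique chunks stored")    # -> 2
```

Because a clone copies only the reference list, snapshotting even a very large object costs a few bytes until the copies diverge.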

The initiative also allows low-latency data access between application microservices and NexentaEdge storage microservices running concurrently on the same server infrastructure, eliminating the overhead associated with traditional iSCSI block access methods and delivering true container-converged infrastructure. It delivers unlimited container mobility across the entire cluster by leveraging the NexentaEdge scale-out architecture for any-time, any-server access to any container image or application backend data. It also ensures integration and management of NexentaEdge storage microservices with Kubernetes; adds ClusterHQ Flocker volume plug-ins for NexentaEdge and NexentaStor to support customers who prefer to keep compute and storage on separate physical infrastructure; and maintains compatibility with Canonical’s Juju, Charms and LXD.

The agility, simplicity and efficiency of microservices and container-based architecture have established them as the de facto standard for developers building cloud applications at scale. This is particularly true for stateless applications that require little or no persistent storage capability from the infrastructure.

As enterprises look for ways to bring these same agility, simplicity and efficiency benefits to stateful applications, the need for persistent storage solutions that integrate with and support container deployments has come to the forefront.

Nexenta is actively working with its partners to address these emerging requirements with its open software-defined storage solutions, enabling customers to easily deploy cloud-native applications whether or not they require persistent storage. In the process, Nexenta is enabling true software-defined, container-converged infrastructure: scale-out, server-based infrastructure that concurrently runs NexentaEdge storage microservices and application microservices, providing high-performance persistent storage services and simple container mobility.


Linux Foundation launches Open API Initiative to extend Swagger specification

The Linux Foundation announced Thursday its Open API Initiative (OAI), recognizing the value of standardizing how REST APIs are described. As an open governance structure under the Linux Foundation, the OAI is focused on creating, evolving and promoting a vendor-neutral description format.

SmartBear Software is donating the Swagger Specification directly to the OAI as the basis of this Open Specification. APIs form the connecting glue between modern applications. Nearly every application uses APIs to connect with corporate data sources, third party data services or other applications.

Creating an open description format for API services that is vendor neutral, portable and open is critical to accelerating the vision of a truly connected world.

The founding members of the Open API Initiative include 3Scale, Apigee, Capital One, Google, IBM, Intuit, Microsoft, PayPal, Restlet and SmartBear.

The Initiative will extend the Swagger specification and format to create an open technical community within which members can easily contribute to building a vendor neutral, portable and open specification for providing metadata for RESTful APIs. This open specification will allow both humans and computers to discover and understand the capabilities of the respective services with a minimal amount of implementation logic.
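
For illustration, here is a hedged sketch of what such a description looks like in practice: a minimal Swagger 2.0 document for a hypothetical single-endpoint API, expressed as a Python dict and serialized to JSON (all endpoint names and fields are invented for the example):

```python
import json

# Minimal Swagger 2.0 description of a hypothetical pet-lookup API.
spec = {
    "swagger": "2.0",
    "info": {"title": "Example Pet API", "version": "1.0.0"},
    "host": "api.example.com",
    "basePath": "/v1",
    "schemes": ["https"],
    "paths": {
        "/pets/{petId}": {
            "get": {
                "summary": "Fetch a pet by ID",
                "parameters": [{
                    "name": "petId",
                    "in": "path",
                    "required": True,
                    "type": "integer",
                }],
                "responses": {
                    "200": {
                        "description": "A pet",
                        "schema": {
                            "type": "object",
                            "properties": {
                                "id": {"type": "integer"},
                                "name": {"type": "string"},
                            },
                        },
                    }
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```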

The Initiative will also promote and facilitate the adoption and use of an open API standard. The open governance model for the Open API Initiative includes a Technical Developer Committee (TDC) that will maintain and evolve the specification, as well as engage users for feedback to inform development.

Swagger was created in 2010 and offered under an open source license a year later. It is a description format used by developers in industries ranging from consumer electronics to energy, finance, healthcare, government, media and travel to design and deliver APIs that support a range of connected applications and services.

With downloads of Swagger and Swagger tooling nearly tripling over the last year, it is considered the most popular open source framework for defining and creating RESTful APIs. SmartBear recently acquired the Swagger API open source project from Reverb Technologies, and is working with its industry peers to ensure the specification and format can be advanced for years to come.

The goal of the OAI specification is to define a standard, language-agnostic interface to REST APIs that allows both humans and computers to discover and understand the capabilities of a service without access to source code or documentation, and without inspecting network traffic.

When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interfaces have done for lower-level programming, Swagger removes the guesswork in calling the service.
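
To make that concrete, a small sketch of the consumer side: given only a description like the one above, a client can assemble a valid request mechanically, without ever seeing the service’s source code. The spec dict and URL are the illustrative ones from the previous sketch.

```python
from urllib.parse import urlunsplit

def build_request_url(spec, path_template, **params):
    """Derive a concrete request URL purely from the API description."""
    op = spec["paths"][path_template]["get"]
    path = path_template
    for p in op.get("parameters", []):
        if p["in"] == "path":
            path = path.replace("{%s}" % p["name"], str(params[p["name"]]))
    scheme = spec.get("schemes", ["https"])[0]
    return urlunsplit((scheme, spec["host"], spec["basePath"] + path, "", ""))

# With the `spec` dict from the previous sketch:
#   build_request_url(spec, "/pets/{petId}", petId=42)
#   -> "https://api.example.com/v1/pets/42"
```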

In organizing the OAI, SmartBear is donating the Swagger Specification to the foundation. Independently, SmartBear continues to invest heavily to foster the Swagger community, ecosystem and tooling built on top of the Swagger Specification. With over 350,000 downloads per month of Swagger and Swagger tooling, the Swagger Specification is the world’s most popular description format for defining RESTful APIs.

Linux Foundation debuts open ledger project to transfigure business transactions

The Linux Foundation has unveiled a collaborative effort to advance blockchain technology, with the intention of developing an enterprise-grade, open source distributed ledger framework, freeing developers to focus on building “robust, industry-specific applications, platforms and hardware systems to support business transactions.”


Blockchain is a digital technology for recording and verifying transactions. The distributed ledger is a permanent, secure tool that makes it easier to create cost-efficient business networks without requiring a centralized point of control.
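
No Hyperledger code had been released when this was written, but the tamper-evidence property described here can be sketched generically: each block commits to the hash of its predecessor, so altering any recorded transaction invalidates every later block. A minimal conceptual example:

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """A chain is valid iff every block's hash and back-link check out."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block([{"from": "alice", "to": "bob", "amount": 10}], "0" * 64)
chain = [genesis,
         make_block([{"from": "bob", "to": "carol", "amount": 4}], genesis["hash"])]
assert verify(chain)
chain[0]["transactions"][0]["amount"] = 1000   # tamper with history...
assert not verify(chain)                       # ...and verification fails
```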


With distributed ledgers, virtually anything of value can be tracked and traded. The application of this emerging technology is showing great promise in the enterprise.


For example, it allows securities to be settled in minutes instead of days. It can be used to help companies manage the flow of goods and related payments or enable manufacturers to share production logs with OEMs and regulators to reduce product recalls.


Early commitments to this work come from Accenture, ANZ Bank, Cisco, CLS, Credits, Deutsche Börse, Digital Asset Holdings, DTCC, Fujitsu Limited, IC3, IBM, Intel, J.P. Morgan, London Stock Exchange Group, Mitsubishi UFJ Financial Group (MUFG), R3, State Street, SWIFT, VMware and Wells Fargo.


Distributed ledger systems are being built across industries, but to realize the promise of this emerging technology, an open source and collaborative development strategy that supports multiple players in multiple industries is required. This development can enable the adoption of blockchain technology at a pace and depth not achievable by any one company or industry.


This type of shared or external Research & Development (R&D) has proven to deliver billions in economic value. The collaboration is expected to help identify and address important features and currently missing requirements for a cross-industry open standard for distributed ledgers that can transform the way business transactions are conducted globally.


Several founding members have already invested in considerable research and development efforts exploring blockchain applications for industry. IBM intends to contribute tens of thousands of lines of its existing codebase and its corresponding intellectual property to this open source community. Digital Asset is contributing the Hyperledger mark, which will be used as the project name, as well as enterprise grade code and developer resources.


R3 is contributing a new financial transaction architectural framework designed specifically to meet the requirements of its global bank members and other financial institutions. These technical contributions, among others from a variety of companies, will be reviewed in detail in the weeks ahead by the project’s formation and Technical Steering Committees.


Last week, the Cloud Native Computing Foundation, a Linux Foundation Collaborative Project and organization dedicated to advancing the development of cloud native applications and services, announced new members from across the industry, its formal open governance structure and new details about its technology stack.


The intent to form the Cloud Native Computing Foundation (CNCF) was announced earlier this year at OSCON to support development in a cloud native environment. Cloud native applications are container-packaged, dynamically scheduled and microservices-oriented.


The foundation focuses on development of open source technologies, reference architectures and a common format for cloud native applications and services. This work provides the necessary infrastructure for Internet companies and enterprises to scale their businesses.


This work is resource intensive, requiring companies to assemble a team of experts that can integrate disparate technologies and maintain all of them. The foundation also seeks to improve overall developer experience, paving the way for faster code reuse, improved machine efficiency, reduced costs and increases in the overall agility and maintainability of applications.


The Linux Foundation builds FD.io, its open source project to establish an IO services framework

The Linux Foundation announced Thursday FD.io (“Fido”), an open source Linux Foundation project aimed at establishing a high-performance IO services framework for computing environments such as network and storage software. The project is also announcing the availability of its initial software and formation of a validation testing lab.

Early support for FD.io comes from founding members 6WIND, Brocade, Cavium, Cisco, Comcast, Ericsson, Huawei, Inocybe Technologies, Intel, Mesosphere, Metaswitch Networks (Project Calico), PLUMgrid and Red Hat.

Designed as a collection of sub-projects, FD.io provides a modular, extensible user space IO services framework that supports rapid development of high-throughput, low-latency and resource-efficient IO services. The design of FD.io is hardware, kernel, and deployment (bare metal, VM, container) agnostic.

Initial code contributions for FD.io include Vector Packet Processing (VPP), technology being donated by one of the project’s founding members, Cisco. The initial release of FD.io is fully functional and available for download, providing an out-of-the-box vSwitch/vRouter utilizing the Data Plane Development Kit (DPDK) for high-performance, hardware-independent I/O.
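
VPP itself is written in C, but the idea behind “vector” packet processing is easy to sketch: instead of pushing one packet at a time through the processing graph, each node handles a whole batch, so per-node overhead (function calls, instruction-cache misses) is paid once per vector rather than once per packet. A conceptual Python illustration, not VPP code:

```python
def decrement_ttl(packets):
    # One graph node; per-node overhead is paid once per vector, not per packet.
    for pkt in packets:
        pkt["ttl"] -= 1
    return [p for p in packets if p["ttl"] > 0]

def route(packets):
    for pkt in packets:
        pkt["next_hop"] = "eth1" if pkt["dst"].startswith("10.") else "eth0"
    return packets

GRAPH = [decrement_ttl, route]   # simplified linear processing graph

def process_vector(packets):
    """Run a whole batch ('vector') of packets through every graph node."""
    for node in GRAPH:
        packets = node(packets)
    return packets

vector = [{"dst": "10.0.0.%d" % i, "ttl": 64} for i in range(256)]
print(len(process_vector(vector)), "packets forwarded")
```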

The initial release will also include a full build, tooling, debug, and development environment and an OpenDaylight management agent. FD.io will also include a Honeycomb agent to expose netconf/yang models of data plane functionality to simplify integration with OpenDaylight and other SDN technologies.

Future contributions from the open source community and FD.io members are expected to extend FD.io capabilities in areas such as firewall, load balancing, LISP, host stack, IDS, hardware accelerator integration, additional SDN protocol support via additional management agents, and other critical IO services for network and storage traffic.

VPP is production code currently running in products available on the market today. VPP runs in user space on multiple architectures, including x86, ARM and Power, and is deployed on various platforms including servers and embedded devices. VPP is two orders of magnitude faster than currently available open source options, reaffirming one of the core principles of FD.io, a focus on performance. Prior to the formation of FD.io, an independent test lab conducted a performance evaluation on VPP.

FD.io also formed its Continuous Performance Lab (CPL), which provides an open source, fully automated testing infrastructure framework for continuous verification of code functionality and performance. Code breakage and performance degradation are flagged before patch review, conserving project resources and increasing code quality.

The CPL allows FD.io to guarantee performance, scalability, and stability for each release. The physical hardware needed to run the performance testing will be hosted at FD.io, with donations of a diverse set of hardware from many vendors.
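
The article doesn’t publish the CPL’s tooling, but the core of such a gate is simple to sketch: compare the candidate build’s measured throughput against a stored baseline and fail before patch review if it regresses past a tolerance. A hedged sketch with invented numbers:

```python
def check_performance(baseline_mpps, candidate_mpps, tolerance=0.05):
    """Fail the gate if candidate throughput regresses more than `tolerance`."""
    floor = baseline_mpps * (1.0 - tolerance)
    if candidate_mpps < floor:
        raise SystemExit(
            "FAIL: %.2f Mpps is below the %.2f Mpps floor "
            "(baseline %.2f Mpps, tolerance %d%%)"
            % (candidate_mpps, floor, baseline_mpps, tolerance * 100)
        )
    print("PASS: %.2f Mpps (baseline %.2f Mpps)" % (candidate_mpps, baseline_mpps))

# Example: baseline from the last verified release, candidate from this patch.
check_performance(baseline_mpps=14.88, candidate_mpps=14.90)
```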

Just as open source efforts such as the OpenDaylight Project (ODL), Open Platform for NFV (OPNFV) and Open Network Operating System (ONOS) have formed to advance orchestration and network controller capabilities, FD.io will foster similar innovation in the critical, and, as yet, unaddressed area of IO services.

FD.io will help advance the state of the art of network and storage infrastructure and will become a “must have” technology in next-gen service provider and enterprise data center strategies as its benefits to areas like SDN and NFV are realized.

Members acknowledge and agree that all new inbound code contributions to the fd.io Project by a member shall be made under the Apache License, Version 2.0. All contributions shall be accompanied by a Developer Certificate of Origin (DCO) sign-off submitted through a Board of Directors approved contribution process. Such contribution process will include steps to also bind non-member contributors and, if not self-employed, their employer, to the licenses expressly granted in the Apache License, Version 2.0 with respect to such contribution.

Contributions will be accompanied by license and copyright attribution information for each file, where it is possible to include such information in the file; any additional license compliance information required to be provided in conjunction with outbound distribution of a contribution; and information sufficient to provide notice of license terms for all additional third-party copyrightable components (software, graphics, text, etc.) introduced as dependencies to the submitted source code.

Information regarding contributors and DCOs will be captured and preserved at the time of contribution. Specific processes for review of inbound contributions, and any changes or updates to such process (the “Review Process”), shall be approved by the Board of Directors and the oversight and implementation of the Review Process shall be the responsibility of the legal committee of the Board of Directors.

OpenHPC debuts initial software stack with backing from universities, government labs, hardware vendors

The Linux Foundation announced this week technical, leadership and member investment milestones for OpenHPC, a Linux Foundation project to develop an open source framework for High Performance Computing (HPC) environments.

While HPC is often thought of as a hardware-dominant industry, the software requirements needed to accommodate supercomputing deployments and large-scale modeling are increasingly demanding.

An open source framework like OpenHPC promises to close technology gaps that hardware enhancements alone can’t address. As open source software has proven its ability to reliably test and maintain operating conditions, it is becoming the de facto software choice for complex environments – meteorology, astronomy, engineering and nuclear physics, and big data science.

OpenHPC is a collaborative community effort that grew out of a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters, including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind, with the goal of providing reusable building blocks for the HPC community.

Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability.

The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions and supercomputing sites. This community works to integrate a multitude of components that are commonly used in HPC systems, and are freely available for open source distribution.

“The OpenHPC community has quickly paved a path of collaborative development that is highly inclusive of stakeholders invested in HPC-optimized software,” said Jim Zemlin, executive director, The Linux Foundation. “To see OpenHPC members include the world’s leading computing labs, universities, and hardware experts, illustrates how open source unites the world’s leading technologists to share technology investments that will shape the next 30+ years of computing.”

The following organizations have shown their support for the OpenHPC open source framework as founding members of the project, including Altair, Argonne National Laboratory, ARM, Atos, Avtech Scientific, Barcelona Supercomputing Center, CEA, Center for Research in Extreme Scale Technologies (Indiana University), Cineca Consorzio Interuniversitario, Cray, Dell, Fujitsu, Hewlett Packard Enterprise, Intel, Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), Leibniz Supercomputing Centre (LRZ), Lenovo, Los Alamos National Security (LANS), ParTec Cluster Computing Center, the Pittsburgh Supercomputing Center, RIKEN, Sandia National Laboratories (SNL), SGI, SUSE and Univa.

OpenHPC aims to offer a mid-stream building block open source code repository that integrates and tests third-party software available as a distribution. Users can then customize HPC solutions by choosing components based on environment needs. The latest software release, OpenHPC 1.1, is now available for download. This initial software stack includes over 60 packages, including tools and libraries, as well as provisioning and a job scheduler.
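
The release notes here don’t name the scheduler, but OpenHPC installations commonly use Slurm; assuming that, a minimal sketch of submitting a two-node job through the stack’s batch scheduler looks like this (script contents and resource requests are illustrative):

```python
import subprocess, tempfile

# A minimal batch job for a Slurm-managed cluster (assumes the scheduler in
# your OpenHPC install is Slurm and that `sbatch` is on PATH).
JOB = """#!/bin/bash
#SBATCH --job-name=hello-hpc
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:05:00
srun hostname
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(JOB)
    script = f.name

result = subprocess.run(["sbatch", script],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())   # e.g. "Submitted batch job 12345"
```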

Committed to open and transparent collaborative development that is inclusive of cross-industry technical needs, OpenHPC Technical Steering Committee (TSC) and Governing Board members span academic, government labs and hardware organizations. The TSC will oversee technical direction and code contributions for the project while the governing board is responsible for operational efficiency, budgetary oversight, establishing IP policies, and marketing.

The Linux Foundation launches big data Platform for Network Data Analytics

The Linux Foundation announced Tuesday that Platform for Network Data Analytics (PNDA) is now a Linux Foundation Project. PNDA provides an open source, scalable platform for next-generation network analytics. The project has also announced the availability of its initial platform release, with early supporters such as Cisco, Deepfield, FRINX, Intersec, Moogsoft, NGENA, Ontology, OpenDataSoft and Tupl.

PNDA aims to eliminate complexity by integrating, scaling and managing a set of open data processing technologies and provides an end-to-end platform for deploying analytics applications and services. The design of PNDA is based on next-generation, big data architecture patterns. It supports batch and real-time streaming data exploration and analysis, at the scale of millions of messages per second.
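
As a toy illustration of the streaming side: the pattern platforms like PNDA support is incremental aggregation over an unbounded message stream, rather than store-then-query. A conceptual sketch, not PNDA code, of a sliding-window aggregate over network telemetry:

```python
from collections import Counter, deque

WINDOW_SECONDS = 60.0
events = deque()                 # (timestamp, interface, drops) within the window
drops_per_interface = Counter()  # rolling aggregate, updated incrementally

def ingest(message, now):
    """Fold one telemetry message into a sliding 60-second aggregate."""
    events.append((now, message["interface"], message["drops"]))
    drops_per_interface[message["interface"]] += message["drops"]
    # Expire events that have slid out of the window.
    while events and events[0][0] < now - WINDOW_SECONDS:
        _, iface, drops = events.popleft()
        drops_per_interface[iface] -= drops

# Stand-in for a message bus feeding millions of messages per second:
for i in range(1000):
    ingest({"interface": "eth%d" % (i % 4), "drops": i % 3}, now=float(i) / 100)
print(drops_per_interface.most_common(2))
```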

Cisco is contributing code to enable end-to-end platform provisioning and management, application packaging, and deployment. The initial release of PNDA is fully functional and available for download as a production-ready solution on OpenStack-based platforms. Support for bare-metal and public-cloud provisioning is expected later this year.

Future contributions from the open source community are expected to extend and innovate upon PNDA’s capabilities, including Hadoop distribution independence, platform infrastructure validation, container support, additional data publishers, and deep-learning framework integration, among others.

PNDA also complements major open source software defined networking, network functions virtualization, and network orchestration efforts such as OpenDaylight, Open Platform for NFV (OPNFV), and FD.io. There is also synergy with the Open Data Platform initiative (ODPi), which defines a common runtime specification, reference implementation and test suite for Hadoop-based distributions, including PNDA.

“There have been significant efforts across the industry in NFV, automation, orchestration and control which have made real-time network service provisioning possible,” said David Ward, SVP, Chief Architect & CTO, Cisco. “Open source software implementations supporting this space are also maturing as shown by OpenStack and OpenDaylight. In comparison, industry efforts to enable the monitoring and analysis of the data produced by these services have been lagging behind.”

“We believe that the solution is to leverage the rapid innovation in big data analytics; that’s the reason for being for PNDA — an open source big data platform that can foster an ecosystem of innovative analytics applications while also supporting the next generation of reactive network services,” Ward added.

“At Deepfield, we have experienced a rapid adoption of our big data analytics platform to replace siloed solutions such as traffic engineering, service assurance, network forensics, and DDOS,” said Jeff Bazar, Chief Strategy Officer, Deepfield. “Our customers have recognized that big data analytics is not just for replacing legacy solutions, but also it is required to enable next generation OSS/BSS, orchestration, control, automation and NFV deployments. Recognizing the importance of this technology, Cisco has made a valuable open source contribution, namely PNDA, to help build the ecosystem and accelerate the development of new big data applications.”

“PNDA combines big data architecture, tools and techniques to deliver network information at virtually unlimited scale to operators and customers alike,” said Tomas Olvecky, Technical Leader, FRINX. “FRINX believes PNDA is the logical next step that allows network operators to close the loop between acquiring data from the network, running analytics to mine high value information and finally feeding policy back into the network to optimize for customer experience and cost.”

“We believe PNDA will boost the ecosystem for network analytics, creating the conditions for open platforms on which external applications will be higher performance and easier to deploy,” said Jean-Marc Coïc, CTO & Co-Founder, Intersec. “Thanks to its efficient API, PNDA opens the way for analytics applications based on location data, within an NFV architecture. Intersec’s extensive experience in big data analytics induces us to promote this type of initiative, and we’re glad to join this project and provide full compatibility with the framework.”

“Almost every company we talk to, whether service provider or digital enterprise, has a strategy and plan to use open source toolkits for aggregation, data warehousing, and big data analytics. All of these companies are duplicating the same effort, experiencing and repeating the same learning processes, and delivering similar resulting solutions,” said Mike Silvey, Executive Vice President, Moogsoft. “The PNDA initiative offers a packaged approach, using the best of breed open source technologies that everyone is already committed to, helping short circuit the process to value and reducing the resource effort involved in architecture, implementation and support. For Moogsoft and our peers in the service assurance community, PNDA offers a single point of data that we can subscribe to, reducing our time to demonstrate value and reducing our need to instrument custom integrations. PNDA is a win-win for both vendor and end-user communities.”

DPDK Project now part of The Linux Foundation in bid to expand open source community and drive development

The Linux Foundation announced Monday that the DPDK Project (Data Plane Development Kit) community has moved to The Linux Foundation. The Linux Foundation provides a neutral home that promotes collaboration around open source technologies, such as a technical governance model that enables the growth of developer communities.

The DPDK Project includes members from the telecommunications industry, network and cloud infrastructure vendors, as well as multiple hardware vendors.

Gold members of the project are ARM, AT&T, Cavium, Intel, Mellanox, NXP, Red Hat and ZTE Corporation. Silver members of DPDK include 6WIND, Atomic Rules, Huawei, Spirent, and Wind River. Korea Advanced Institute of Science and Technology (KAIST), University of Limerick, University of Massachusetts Lowell, and Tsinghua University are associate members.

DPDK is the Data Plane Development Kit that consists of libraries to accelerate packet processing workloads running on a wide variety of CPU architectures. In a world where the network is becoming fundamental to the way people communicate, performance, throughput, and latency are increasingly important for applications like wireless core and access, wireline infrastructure, routers, load balancers, firewalls, video streaming, and VoIP.
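
Much of that acceleration comes from poll-mode drivers: instead of waiting on per-packet interrupts, a DPDK application busy-polls NIC queues from user space and pulls packets in bursts (via C APIs such as rte_eth_rx_burst). A conceptual Python sketch of the receive loop, purely for illustration:

```python
BURST_SIZE = 32

def rx_burst(queue, max_pkts):
    """Stand-in for DPDK's rte_eth_rx_burst(): drain up to a burst of packets."""
    burst = []
    for _ in range(min(max_pkts, len(queue))):
        burst.append(queue.pop(0))
    return burst

def poll_loop(queue, handle, iterations):
    # Poll-mode: busy-poll the queue instead of sleeping on interrupts,
    # trading CPU for predictable low latency and high throughput.
    for _ in range(iterations):
        for pkt in rx_burst(queue, BURST_SIZE):  # per-burst work amortizes overhead
            handle(pkt)

queue = [{"id": i} for i in range(100)]
seen = []
poll_loop(queue, seen.append, iterations=10)
print(len(seen), "packets processed")  # -> 100
```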

By enabling very fast packet processing, DPDK is making it possible for the telecommunications industry to move performance-sensitive applications like the backbone for mobile networks and voice to the cloud. It was also identified as a key enabling technology for Network Functions Virtualization (NFV) in the original ETSI NFV White Paper.

DPDK was created in 2010 by Intel and made available under a permissive open source license. The open source community was established at DPDK.org in 2013 by 6WIND and has facilitated the continued expansion of the project.

Since then, the community has been continuously growing in terms of the number of contributors, patches, and contributing organizations, with 10 major releases completed including contributions from over 400 individuals from 70 different organizations. DPDK now supports all major CPU architectures and NICs from multiple vendors, which makes it ideally suited to applications that need to be portable across multiple platforms.

Over 20 open source projects build on DPDK libraries, including MoonGen, mTCP, Ostinato, Lagopus, Fast Data (FD.io), Open vSwitch, OPNFV, and OpenStack. Strengthening the ecosystem around DPDK will enable it to meet the needs of the users and projects that depend on it and helps to foster open innovation.

The Linux Foundation and the DPDK community have worked to establish a governance and membership structure for the DPDK Project to nurture a vibrant and open community, and also provide financial support to help the community. A Governing Board will guide marketing, and consider business impact and alignment with the community. The Technical Board, which is in charge of the technical direction of DPDK, is already established and consists of key contributors who lead ongoing maintenance and evolution of the project.

“We’re seeing the telecom industry become more collaborative, largely because of commitment in open source and other standards-type processes,” said Chris Rice, Senior Vice President of AT&T Labs. “The Linux Foundation has a history of aligning the open source communities, and DPDK’s transition to The Linux Foundation helps promote more open collaboration for network packet processing.”

“Cavium welcomes the move of the DPDK Project to The Linux Foundation,” said Larry Wikelius, Vice President Software Ecosystem and Solutions Group, Cavium. “In the last two years, we have expanded DPDK to support Cavium’s ARMv8 processors as well as our range of adapters and Ethernet NICs, which brings significantly more choice to builders of high performance Cloud, NFV, and premise-based networking equipment. Cavium is also driving enhancements to allow hardware schedulers/load balancers to better utilize every core in the most efficient way.”

“Intel has long appreciated the strong value that DPDK provides as a high performance packet processing building block, enabling the move to efficiently virtualize network solutions on open platforms,” said Sandra Rivera, Vice President and General Manager, Network Platforms Group at Intel. “We look forward to continuing to work with The Linux Foundation and DPDK community by contributing and innovating for optimized solutions that accelerate and scale deployments of NFV and SDN.”

“Mellanox is committed to open source development and looks forward to driving DPDK forward as part of The Linux Foundation,” said Amit Krig, Vice President of Software Engineering, Mellanox Technologies. “We have been an active participant since the project was first initiated, and will work with the expanded community to optimize DPDK to deliver both performance and efficiency for network intensive applications.”

“NXP is pleased to participate in the leadership of the new DPDK Project within The Linux Foundation,” said Richard House, Vice President of Global Software Development of NXP. “DPDK is an important technology that supports the development of open standards in networking software. NXP is delighted to work with other leading semiconductor, network equipment, and software developers on the continued development of DPDK within an open forum.”

“Open source communities continue to be a driving force behind technology innovation, and open networking and NFV are great examples of that. Red Hat believes deeply in the power of open source to help transform the telecommunications industry, enabling service providers to build next generation efficient, flexible and agile networks,” said Chris Wright, Vice President and Chief Technologist, Office of Technology at Red Hat. “DPDK has played an important role in this network transformation, and our contributions to the DPDK community are aimed at helping to continue this innovation.”

“DPDK is a key technology that enables the communications industry to move to a virtualized infrastructure. As a global leader in telecommunications and information technology, we see strong open source community support as an essential element to building high performance networking solutions for the cloud infrastructure,” said Zhang Wanchun, Vice President of ZTE and Principal of Wireless Product R&D Institute, ZTE. “We will consistently support the development of the DPDK project and collaborate with industry peers to help build and shape this technology for the future.”

The Linux Foundation announces Linux on Azure training course to bring Azure professionals up to speed with Linux and vice versa

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced on Thursday the availability of a new training course, LFS205 – Administering Linux on Azure.

A large number of the virtual machines running in Azure are utilizing the Linux operating system. Both Linux and Azure professionals should make sure they know how to manage Linux workloads in an Azure environment as this trend is likely to continue.

LFS205 provides an introduction to managing Linux on Azure. Whether someone is a Linux professional who wants to learn more about working on Azure, or an Azure professional who needs to understand how to work with Linux in Azure, this course provides the requisite knowledge.

The course starts with an introduction to Linux and Azure, after which students will learn more about advanced Linux features and how they are managed in an Azure environment. Next, the course goes into information about managing containers, either in Linux or with the open source container technology that is integrated in Azure. After that, LFS205 covers how to deploy virtual machines in Azure, discussing different deployment scenarios.
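
As a small taste of the deployment topic, a hedged sketch of creating a Linux VM with the Azure CLI driven from Python; the resource group and VM names are invented, and it assumes the `az` CLI is installed and already logged in:

```python
import subprocess

def create_linux_vm(resource_group, name):
    """Provision an Ubuntu LTS VM via the Azure CLI (assumes `az login` done)."""
    subprocess.run(
        [
            "az", "vm", "create",
            "--resource-group", resource_group,
            "--name", name,
            "--image", "UbuntuLTS",
            "--admin-username", "azureuser",
            "--generate-ssh-keys",
        ],
        check=True,
    )

# Hypothetical names for illustration:
# create_linux_vm("demo-rg", "lfs205-lab-vm")
```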

Once the VMs are available in Azure, students will need to know how to manage them in an efficient way, which is covered next. The last part of this course teaches how to troubleshoot Linux in Azure, and to monitor Linux in Azure using different open source tools.

Students can expect to learn about advanced Linux features and how they are managed in an Azure environment; managing containers; deploying virtual machines in Azure, and managing them, apart from monitoring and troubleshooting Linux in Azure.

“With over 40 percent of VMs on Azure now Linux, we are working closely with The Linux Foundation on a Linux on Azure course to make sure customers currently using Linux on Azure–and those who want to–have the tools and knowledge they need to run their enterprise workloads on our cloud,” said John Gossman, Distinguished Engineer, Microsoft Azure, and Linux Foundation Board Member. “We look forward to continued collaboration with The Linux Foundation to continue to deliver trainings to make customers’ lives easier.”

“As shown by The Linux Foundation and Dice’s Open Source Jobs Report, cloud computing skills are by far the most in demand by employers,” said Linux Foundation General Manager for Training & Certification, Clyde Seepersad. “This shouldn’t be a surprise to anyone, as the world today is run in the cloud. Azure is one of the most popular public clouds, and a huge portion of its instances run on Linux. That’s why we feel this new course is essential to give Azure professionals the Linux skills they need, give Linux professionals the Azure skills they need, and train new professionals to ensure industry has the talent it needs to meet the growing demand for Linux on Azure.”

LFS205 is taught by Sander van Vugt, a Linux professional living in the Netherlands and working for customers around the globe. Sander is the author of many Linux-related video courses and books, and an instructor and course developer for The Linux Foundation. He is also a managing partner of ITGilde, a large co-operative in which about a hundred independent Linux professionals in the Netherlands have joined forces.

The course is available to begin immediately. The $299 course fee provides unlimited access to all course content and labs for one year.


iconectiv now part of Linux Foundation’s open source networking collaboration, aiding the transition to NFV

iconectiv announced this week that it has joined The LF Networking Fund (LFN), a new open source networking initiative created by The Linux Foundation. The focus of LFN is to increase collaboration and operational excellence across networking projects, including Open Platform for NFV (OPNFV), to help deliver a new generation of services.

Through LFN the Linux Foundation has developed a cross-collaboration initiative that brings together over 100 member organizations from across the globe. These organizations consist of networking and enterprise vendors, system integrators and cloud providers working across nine of the top ten open-source networking projects focused on all aspects of the network stack, including NFV, SDN, data IO speed, automation, orchestration, and predictive network analytics.

The foundation expects this effort to build harmonization between open source and open standards, bringing together a range of emerging, network-dependent initiatives that will drive enhanced operational efficiency through shared development and deployment best practices and resources.

As a member of LFN, iconectiv brings more than 30 years of experience performing critical, behind-the-scenes work for service providers and related telecom firms, enabling the transition from physical to virtual network functions. Through its Common Language solution, iconectiv has already created unique codes for more than 1,000 virtual functions, demonstrating its ability to assist service providers in the transition to Network Function Virtualization (NFV).

The LF Networking Fund (LFN) was founded on January 1 of this year as a new entity to increase collaboration and operational excellence across networking projects. LFN integrates the governance of participating projects to improve operational excellence and simplify member engagement, while each technical project retains its technical independence and project roadmap.

The founding projects within LFN are FD.io, OpenDaylight, ONAP, OPNFV, PNDA, and SNAS.

“The move to hybrid networks that combine physical assets with virtual functions promises enhanced operational efficiencies and speed-to-market for new kinds of services for customers globally,” said Alex Berry, Executive Vice President, Information Solutions, iconectiv. “The Linux Foundation understands the successful integration of these two worlds will require the collaboration of network architects and operators. We intend to offer our intimate knowledge of interconnection and network and operations management to help ensure the full rollout and adoption of hybrid networks.”

“NFV is an important, highly complex and constantly evolving initiative for the industry; an open source NFV ecosystem developed by the world’s leading telecom experts, that integrates across the stack, benefits the entire industry,” said Arpit Joshipura, general manager, Networking, The Linux Foundation. “iconectiv brings invaluable expertise regarding network and operations management and the interconnection of global networks. We welcome their participation in this collaborative industry effort.”


The Linux Foundation brings together network automation and cloud native communities as network functions evolve to CNFs

The Linux Foundation announced on Wednesday further collaboration between telecom and cloud industry vendors enabled by the Cloud Native Computing Foundation (CNCF) and LF Networking (LFN), fueling migrations of Virtual Network Function (VNFs) to Cloud-native Network Functions (CNFs).

Early examples of both VNF and CNF enablement are seen within ONAP and via working projects from the CNCF and ONAP communities. ONAP’s inaugural release, Amsterdam, represents the second stage (2.0) of network architecture evolution: it runs in a VM, in an OpenStack, VMware, Azure or Rackspace environment. ONAP’s upcoming release, Casablanca, brings the next phase of network architecture evolution (3.0): it runs on Kubernetes, and works on any public, private, or hybrid cloud. ONAP currently supports VNFs on either VMs (running on OpenStack or VMware) or containers (running on Kubernetes via KubeVirt or Virtlet).

Specific projects addressing the migration roadmap to cloud native include LFN ONAP Multi-VIM, which aims to enable ONAP to deploy and run on multiple infrastructure environments, for example OpenStack and its different distributions, public and private clouds, and microservices containers. The LFN ONAP OOM project enables ONAP modules to run on Kubernetes, contributing to the availability, resilience and scalability of ONAP deployments, and sets the stage for full implementation of a microservices architecture, expected with the third release, Casablanca, due out later this year.

The latest OPNFV release, Fraser, delivers expanded cloud native NFV capabilities across nine different projects, more than doubles the number of supported Kubernetes-based scenarios, deploys two containerized VNFs, and integrates additional cloud native technologies from CNCF relating to service mesh (Istio/Envoy), logging (Fluentd), tracing (OpenTracing with Jaeger), monitoring (Prometheus), and remote procedure calls (gRPC). These updates move the cloud native capabilities beyond basic container orchestration to include operational needs for cloud native applications. Additionally, the FastDataStacks project takes advantage of FD.io work to incorporate the VPP dataplane into Kubernetes networking capabilities to enable cloud native network-centric services.

Relevant projects from the CNCF community include the Cross-cloud Continuous Integration (CI) project, which ensures cross-project interoperability and cross-cloud deployments of all cloud native technologies and shows the daily status of builds and deployments on a status dashboard; Istio, which lets users connect, manage, and secure microservices for both containerized and non-containerized workloads; Ligato, which provides a platform and code samples for development of cloud native VNFs and includes a VNF agent for Vector Packet Processing (FD.io) and a Service Function Chain (SFC) controller for stitching virtual and physical networking; and Network Service Mesh, a novel approach to solving complicated L2/L3 use cases in Kubernetes that are tricky to address with the existing Kubernetes Network Model. Inspired by Istio, Network Service Mesh maps the concept of a service mesh to L2/L3 payloads.

As telecom network transformation requires a hybrid approach, service providers will be better equipped to deliver next-gen services by realizing the full promise of containers, utilizing the best of both telecom and cloud. Combined with open source, ecosystem-wide benefits include portability, resiliency, reduced capex and opex, increased development velocity, automation, and scalability.  

As networks evolve to support next-generation services and applications, they will need to embrace characteristics inherent to cloud native architecture, such as scalability, automation, and resiliency. Compared to traditional VNFs (network functions encapsulated in a Virtual Machine (VM) running in a virtualized environment on OpenStack or VMware, for example), CNFs (network functions running on Kubernetes on public, private, or hybrid cloud environments) are lighter weight and faster to instantiate. Container-based processes are also easier to scale, chain, heal, move and back up.
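
To illustrate why CNFs are easier to scale, here is a hedged sketch using the Kubernetes Python client to deploy and then scale a hypothetical containerized network function (the image name and labels are invented; it assumes a reachable cluster and the `kubernetes` package):

```python
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() inside a pod
apps = client.AppsV1Api()

labels = {"app": "example-cnf"}      # invented name, for illustration only
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-cnf"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="packet-proc",
                                   image="registry.example.org/cnf:1.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling the CNF is a declarative, one-call change:
apps.patch_namespaced_deployment_scale(
    name="example-cnf", namespace="default",
    body={"spec": {"replicas": 8}},
)
```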

Two of the fastest-growing Linux Foundation projects – ONAP (part of LF Networking) and Kubernetes (part of CNCF) – are coming together in next-generation telecom architecture as operators evolve their VNFs into CNFs running on Kubernetes.

“We have seen service providers embrace open source networking in large numbers. Benefits of virtualization and VNFs, coupled with automation platforms like ONAP, are now de-facto deployment models,” said Arpit Joshipura, General Manager, networking, The Linux Foundation. “As edge, IoT, 5G and AI start using these highly-automated cloud platforms, we are excited to see the best of both worlds come together – the scale and portability of cloud coupled with the agility, reliability and automation of telecom.”

“I’m thrilled to collaborate with our sister Linux Foundation organization, LF Networking, to demonstrate the capabilities of CNFs,” said Dan Kohn, Executive Director of Cloud Native Computing Foundation. “These implementations will bring greater elasticity to the networking space through critical pieces of the cloud native stack – like container orchestration, service mesh architectures and microservices – and allow for a new level of self-management and scalability.”

“Containerization has been one of the cornerstones of our network transformation,” said Catherine Lefevre, AVP of Research Technology Management, AT&T. “Cloud-native development represents the next level of efficiency as part of the ONAP target architecture and we’re excited to be a part of this initiative. We expect significant benefits from the OOM Project, such as improved scalability and resiliency, as well as additional cost efficiencies.”

“Cloud-native NFV delivers on the agility, velocity and cost savings promised so many years ago in the NFV manifesto. We are at the cusp of solving the two major blockers: the VNF-to-CNF transition, and a cloud-native way to wire the CNFs together in Kubernetes,” said David Ward, CTO and chief architect of Engineering, Cisco. “VPP provides the feature-rich, high-performance userspace dataplane needed for CNFs, Ligato provides the toolkit for building the CNF agents to manage the VPP dataplane, and Network Service Mesh provides a truly ‘cloud-native’ approach to how to stitch CNFs together. We look forward to seeing the good work in these areas at Kubecon in Seattle in December.”


The Linux Foundation advances Ceph Foundation to manage data growth and information generated from cloud, container, AI applications

The Linux Foundation announced this week that over 30 global technology leaders are forming a new foundation to support the Ceph open source project community in managing the massive growth in data and information generated by cloud, container and AI applications.

The Ceph project develops a unified distributed storage system providing applications with object, block, and file system interfaces.
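
For a sense of the object interface, a hedged sketch using Ceph’s Python binding, python-rados; the pool name and config path are illustrative, and it assumes a reachable Ceph cluster:

```python
import rados  # Ceph's librados Python binding (python-rados / python3-rados)

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path is illustrative
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo-pool")  # pool name is illustrative
    try:
        ioctx.write_full("greeting", b"hello ceph")  # store an object
        print(ioctx.read("greeting"))                # -> b'hello ceph'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```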

Founding Premier members of Ceph Foundation include Amihan, Canonical, China Mobile, DigitalOcean, Intel, OVH, ProphetStor Data Services, Red Hat, SoftIron, SUSE, Western Digital, XSKY Data Technology, and ZTE. The Ceph Foundation will organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit. This will help galvanize rapid adoption, training and in-person collaboration across the Ceph ecosystem.

Ceph is used by cloud providers and enterprises, including financial institutions (Bloomberg, Fidelity), cloud service providers (Rackspace, Linode), academic and government institutions (Massachusetts Open Cloud), telecommunications infrastructure providers (Deutsche Telekom), auto manufacturers (BMW), software solution providers (SAP, Salesforce), and many more.

Ceph Foundation General members include Ambedded Technology, Arm, Catalyst Cloud, Croit GmbH, EasyStack, Intelligent Systems Services, Pingan Technology, QCT, Sinorail, and Xiaoju Science Technology.

Associate members include Boston University Information Services and Technology, CERN (European Organization for Nuclear Research), FAS Research Computing – Harvard, Greek Research and Technology Network (GRNET), Monash University, South African Radio Astronomy Observatory (SARAO), Science and Technology Facilities Council (STFC) at UK Research and Innovation (UKRI), and University of California Santa Cruz’s Center for Research in Open Source Software (CROSS).

Ceph is also used by Rook, a Cloud Native Computing Foundation project that brings seamless provisioning of file, block and object storage services into the Kubernetes environment, running the Ceph storage infrastructure in containers alongside applications that are consuming that storage.

Efficient, agile, and massively scalable, Ceph significantly lowers the cost of storing enterprise information in the private cloud and provides high availability for object, file, and block data. Unstructured data makes up 80 percent or more of enterprise data, is growing at a rate of 55 to 65 percent per year, and is common in rich media, predictive analytics, sensors, social networks, and satellite imagery.

Block and file storage are critical to any IT infrastructure organization and are important components of infrastructure platforms like OpenStack and Kubernetes. According to recent user surveys, roughly two-thirds of OpenStack clouds use Ceph block storage.

The growth of new cloud, container and artificial intelligence/machine learning applications is driving increased use of Ceph. For example, Ceph combined with analytics and machine learning enables enterprises to comb through massive amounts of unstructured data to spot patterns in customer behavior, online customer conversations and potential noncompliance scenarios.

“Ceph has a long track record of success when it comes to helping organizations with effectively managing high growth and expanding data storage demands,” said Jim Zemlin, Executive Director of the Linux Foundation. “Under the Linux Foundation, the Ceph Foundation will be able to harness investments from a much broader group to help support the infrastructure needed to continue the success and stability of the Ceph ecosystem.”

“A guiding vision for Ceph is to be the state of the art for reliable, scale-out storage, and to do so with 100 percent open source,” said Sage Weil, Ceph co-creator, project leader, and chief architect at Red Hat for Ceph. “While early public cloud providers popularized self-service storage infrastructure, Ceph brings the same set of capabilities to service providers, enterprises, and individuals alike, with the power of a robust development and user community to drive future innovation in the storage space. Today’s launch of the Ceph Foundation is a testament to the strength of a diverse open source community coming together to address the explosive growth in data storage and services.”

“Ceph was designed and built for scalability, initially with supercomputers and later with cloud infrastructure in mind. A key design premise was that the storage system needs to provide a highly reliable and available service in a dynamic and increasingly heterogeneous hardware environment where everything can potentially fail,” said Carlos Maltzahn of University of California, Santa Cruz, a co-founder of the research project that first created Ceph over a decade ago.

