Google Cloud vs AWS 2022: Compare Features, Pricing, Pros & Cons

Selecting both primary and secondary cloud services is now a common IT strategy for most enterprises. Recent research shows that about 90 percent of enterprises and non-profit organizations use multiple cloud services accounts. This is a big change from just a few years ago, when some companies were still reluctant to trust their business data to any cloud application. Today we offer a Google Cloud Platform vs. Amazon Web Services comparison.

Public cloud service providers such as Amazon Web Services, Microsoft Azure, Google, IBM, Dell EMC, Salesforce, Oracle and others are making it easier all the time for customers to come and go or add or subtract computing capacity or apps as needed. These and other providers also keep coming up with new and more efficient services for companies to use, many of which now feature artificial intelligence options to make them more usable for technical and non-technical employees alike.

In this article, we take a close look at two of the three largest cloud services providers in the world: Amazon Web Services and Google Cloud Platform. eWEEK uses research from several different sources, including individual analysts, TechnologyAdvice, Gartner, IDC, Capterra, IT Central Station, G2 and others.

What we’ll do here is compare these two global cloud storage and computing services at a high level and in a few different ways, to help you decide which one is the most cost- and feature-efficient fit for your company.

Similarities and Differences of AWS vs. Google Cloud

To use an AWS service, users must sign up for an AWS account. After they have completed this process, they can launch any service under their account within Amazon’s stated limits, and these services are billed to their specific account. If needed, users can create billing accounts and then create sub-accounts that roll up to them. In this way, organizations can emulate a standard organizational billing structure.
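The billing rollup described above can be sketched as a toy model. This is illustrative only, not the AWS API; the account names and charge amounts are hypothetical:

```python
# Toy model of consolidated billing: each sub-account's charges
# roll up to the parent billing account it belongs to.
def roll_up_charges(billing_tree: dict, charges: dict) -> dict:
    """Sum each sub-account's charges into its parent billing account."""
    totals = {}
    for parent, subs in billing_tree.items():
        totals[parent] = sum(charges.get(s, 0.0) for s in subs)
    return totals

# Hypothetical organization: one payer account, three sub-accounts.
org = {"payer-account": ["team-a", "team-b", "team-c"]}
usage = {"team-a": 120.50, "team-b": 75.25, "team-c": 4.25}
print(roll_up_charges(org, usage))  # {'payer-account': 200.0}
```

This mirrors the standard organizational billing structure the article mentions: individual teams launch services under their own sub-accounts, but the charges surface in one place.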

Similarly, GCP requires users to set up a Google account to use its services. However, GCP organizes service usage by project rather than by account. In this model, users can create multiple, wholly separate projects under the same account. In an organizational setting, this model can be advantageous, allowing users to create project spaces for separate divisions or groups within a company. This model can also be useful for testing purposes: once a user is done with a project, he or she can delete the project, and all of the resources created by that project also will be deleted.
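The project-scoped cleanup semantics described above can be modeled in a few lines. This is a toy model, not the GCP API; the project and resource names are hypothetical:

```python
# Toy model of GCP-style project scoping: resources belong to a
# project, and deleting the project deletes its resources with it.
class Project:
    def __init__(self, name: str):
        self.name = name
        self.resources = []

class Account:
    def __init__(self):
        self.projects = {}

    def create_project(self, name: str) -> Project:
        self.projects[name] = Project(name)
        return self.projects[name]

    def delete_project(self, name: str) -> None:
        # Dropping the project drops every resource created under it.
        del self.projects[name]

acct = Account()
test_proj = acct.create_project("load-test")
test_proj.resources += ["vm-1", "bucket-1"]
acct.delete_project("load-test")      # vm-1 and bucket-1 go away too
print("load-test" in acct.projects)   # False
```

This is what makes the model convenient for testing: tearing down one project cannot leave orphaned resources behind, and it cannot touch other projects under the same account.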

AWS and GCP both have default soft limits on their services for new accounts. These soft limits are not tied to technical limitations for a given service; instead, they are in place to help prevent fraudulent accounts from using excessive resources, and to limit risk for new users, keeping them from spending more than intended as they explore the platform. If you find that your application has outgrown these limits, AWS and GCP provide straightforward ways to get in touch with the appropriate internal teams to raise the limits on their services.
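The soft-limit behavior can be sketched as a simple quota check. The limit value is hypothetical, and this is an illustrative model, not either platform's actual quota mechanism:

```python
# Illustrative "soft limit": requests that would exceed the default
# quota are rejected until the account holder asks for a raise.
DEFAULT_SOFT_LIMIT = 5  # hypothetical per-service cap for new accounts

def launch(requested: int, current: int, limit: int = DEFAULT_SOFT_LIMIT) -> int:
    """Return the new instance count, or fail if the quota is exceeded."""
    if current + requested > limit:
        raise RuntimeError(
            f"quota exceeded: request a limit increase above {limit}"
        )
    return current + requested

running = launch(3, 0)        # fine: 3 instances running
running = launch(2, running)  # fine: exactly at the limit of 5
# launch(1, running) would raise RuntimeError until the limit is raised
```

Note that the limit is enforced by policy, not capacity, which is why a support request (rather than an architectural change) is the right fix once an application outgrows it.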

Resource management interfaces

AWS and GCP each provide a command-line interface (CLI) for interacting with their services and resources. AWS provides the AWS CLI, and GCP provides the Cloud SDK. Each is a unified CLI for all services, and each is cross-platform, with binaries available for Windows, Linux and macOS. In addition, in GCP, you can use the Cloud SDK in your web browser by using Google Cloud Shell.

AWS and GCP also provide web-based consoles. Each console allows users to create, manage, and monitor their resources. The console for GCP is located at https://console.cloud.google.com/.

Pricing processes are different

One area where there is a notable difference between these two market leaders is pricing. AWS uses a pay-as-you-go model and charges customers per hour; they pay for a full hour, even if they use only one minute of it. Google Cloud follows a to-the-minute pricing model.
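The difference in billing granularity described above is easy to quantify. A worked example, using a hypothetical rate of $0.60 per instance-hour for a job that runs 61 minutes:

```python
import math

# Hourly billing rounds usage UP to whole hours; per-minute billing
# charges only for the minutes actually used.
rate_per_hour = 0.60   # hypothetical rate
minutes_used = 61

hourly_billed = math.ceil(minutes_used / 60) * rate_per_hour   # 2 hours billed
per_minute_billed = minutes_used * (rate_per_hour / 60)        # 61 minutes billed

print(f"billed per hour:   ${hourly_billed:.2f}")    # $1.20
print(f"billed per minute: ${per_minute_billed:.2f}")  # $0.61
```

The one-minute overrun nearly doubles the hourly-billed cost, which is why billing granularity matters most for short-lived or bursty workloads.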

Many experts recommend that enterprises evaluate their public cloud needs on a case-by-case basis and match specific applications and workloads with the vendor that offers the best fit for their needs. Each of the leading vendors has particular strengths and weaknesses that make them a good choice for specific projects.

So, let’s get more specific.

What is Google Cloud Platform?

For the past 15 years, Google has been building one of the fastest, most powerful, and highest-quality cloud infrastructures on the planet. Internally, Google itself uses this infrastructure for several high-traffic and global-scale services, including Gmail, Maps, YouTube and Search. Because of the size and scale of these services, Google has put a lot of work into optimizing its infrastructure and creating a suite of tools and services to manage it effectively. GCP puts this infrastructure and these management resources at users’ fingertips.

Google Cloud new features for 2021

  • In July 2020, Google Cloud introduced Big Query Omni, a new multi-cloud analytics solution, powered by its hybrid and multi-cloud Anthos platform, that allows users to run the same database in multiple cloud and data center environments. The new package extends Google Cloud’s analytics platform to other public clouds without leaving the BigQuery user interface and without having to move or copy datasets. It’s available in private alpha for Amazon Web Services’ Amazon Simple Storage Service (S3), and support for Microsoft Azure is coming soon.
  • Also in July 2020, Google Cloud unveiled the first product of its confidential computing portfolio: new Confidential VMs that allow users to run workloads in Google Cloud while ensuring their data is encrypted while it’s in use and being processed, not just at rest and in transit. The solution, available in beta for Google Compute Engine, helps remove cloud adoption barriers for customers in highly regulated industries. The Confidential VMs are based on Google Cloud’s N2D series instances and leverage AMD’s Secure Encrypted Virtualization feature supported by its 2nd Gen AMD EPYC CPUs. Dedicated per-VM encryption keys are generated in hardware and are not exportable.
  • New Assured Workloads for Government, now in beta in Google’s U.S. regions, enable customers to automatically apply controls to their workloads, making it easier to meet security and compliance requirements for processing government data, including those concerning U.S. data locations and personnel access.
  • Customer to Community (C2C) is billed as an independent community where Google Cloud customers, including IT executives, developers and other cloud professionals, can connect, share and learn. Customers joining C2C, which is currently open to those in North America, Europe, the Middle East and Africa, will get access to exclusive networking opportunities, the ability to connect with other customers through virtual and in-person events, and expanded access to Google Cloud experts and content such as knowledge forums, white papers and methodologies. They’ll also receive early and exclusive access to Google Cloud product roadmaps and will be able to provide feedback and serve as customer-advisors.

Defining GCP

Google Cloud Platform was developed by Google and launched in 2008. It is written in Java, C++, Python and Ruby, and it provides services across the IaaS, PaaS and serverless models. Google Cloud is organized into different platforms, such as Google App Engine, Google Compute Engine, Google Cloud Datastore, Google Cloud Storage, Google BigQuery (for analytics) and Google Cloud SQL. The Google Cloud platform offers high-level computing, storage, networking and databases.

It also offers different options for networking, such as virtual private cloud, Cloud CDN, Cloud DNS, load balancing and other optional features, as well as management of big data and Internet of things (IoT) workloads. Machine learning underpins Google Cloud services such as Cloud Machine Learning Engine, Cloud Video Intelligence, the Cloud Speech API and the Cloud Vision API. Suffice to say there are numerous options inside Google Cloud, which is most often used by developers, as opposed to line-of-business company employees.

Google Regions and Zones

Nearly all AWS products are deployed within regions located around the world. Each region comprises a group of data centers that are in relatively close proximity to each other. Amazon divides each region into two or more availability zones. Similarly, GCP divides its service availability into regions and zones that are located around the world. For a full mapping of GCP’s global regions and zones, see Cloud Locations.

In addition, some GCP services are located at a multi-regional level rather than the more granular regional or zonal levels. These services include Google App Engine and Google Cloud Storage. Currently, the available multi-regional locations are United States, Europe and Asia.

By design, each AWS region is isolated and independent from other AWS regions. This design helps ensure that the availability of one region doesn’t affect the availability of other regions, and that services within regions remain independent of each other. Similarly, GCP’s regions are isolated from each other for availability reasons. However, GCP has built-in functionality that enables regions to synchronize data across regions according to the needs of a given GCP service.

AWS and GCP both have points of presence (POPs) located in many more locations around the world. These POP locations help cache content closer to end users. However, each platform uses their respective POP locations in different ways:

  • AWS uses POPs to provide a content delivery network (CDN) service, Amazon CloudFront.
  • GCP uses POPs to provide Google Cloud CDN (Cloud CDN) and to deliver built-in edge caching for services such as Google App Engine and Google Cloud Storage.

GCP’s points of presence connect to data centers through Google-owned fiber. This unimpeded connection means that GCP-based applications have fast, reliable access to all of the services on GCP, Google said.

Google Cloud Platform: Pros, cons based on user feedback

PROS: Users count heavily on Google’s engineering expertise. Google has an exemplary offering in application container deployments, since Google itself developed the Kubernetes app management standard that both AWS and Azure now offer. GCP specializes in high-end computing offerings such as big data, analytics and machine learning. It also provides considerable scale-out options and data load balancing; Google knows what fast data centers require and offers fast response times in all of its solutions.

CONS: Google is a distant third in market share (8 percent; AWS is at 33 percent, Azure at 16 percent), most likely because it doesn’t offer as many different services and features as AWS and Azure. It also doesn’t have as many global data centers as AWS or Azure, although it is quickly expanding. Gartner said that its “clients typically choose GCP as a secondary provider rather than a strategic provider, though GCP is increasingly chosen as a strategic alternative to AWS by customers whose businesses compete with Amazon, and that are more open-source-centric or DevOps-centric, and thus are less well-aligned to Microsoft Azure.”

This is a high-level comparison of two of the top three major cloud service leaders here in 2021. We will be updating this article with new information as it becomes available, and eWEEK will also be examining in closer detail the various services—computing, storage, networking and tools—that each vendor offers.

What is AWS?

Amazon Web Services (AWS) is a cloud service platform from Amazon that provides services in different domains, such as compute, storage and delivery, which help businesses scale and grow. AWS packages these domains as services that can be used to create and deploy different types of applications in the cloud, or to migrate apps to the AWS cloud. These services are designed to work with one another and produce scalable, efficient outcomes. AWS services fall into three types: infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS). AWS was launched in 2006 and became the most-purchased cloud platform among those currently available. Cloud platforms offer advantages such as reduced management overhead and cost minimization, among many others.

Important new AWS features for 2021

  • AWS Control Tower now includes an organization-level aggregator, which assists in detecting external AWS Config rules. This will provide you with visibility in the AWS Control Tower console to see externally created AWS Config rules in addition to those AWS Config rules created by AWS Control Tower. The use of the aggregator enables AWS Control Tower to detect this information and provide a link to the AWS Config console without the need for AWS Control Tower to gain access to unmanaged accounts.
  • Amazon Elastic Container Service (Amazon ECS) has launched a new management console. You can now create, edit, view and delete Amazon ECS services and tasks, and view ECS clusters, in fewer, simpler steps. You can also learn about ECS capabilities and discover your ECS resources quickly and easily in the new console, as well as switch back to the existing console if needed. The new console will be continuously updated until all functionality from the existing console is available, and both consoles will remain available until then.
  • AWS IoT SiteWise Monitor now supports AWS CloudFormation, enabling customers to create and manage AWS IoT SiteWise Monitor resources such as portals, projects, dashboards, widgets, and properties using CloudFormation.
  • AWS Data Exchange Publisher Coordinator and AWS Data Exchange Subscriber Coordinator are new AWS Solutions Implementations that automate the publishing and consumption of data via AWS Data Exchange.
  • As of Jan. 1, 2021, users now can use additional controls on their Amazon WorkDocs Android application that enable them to execute workflows such as deleting, renaming, and adding files and folders to their Favorite list directly from the Folder List view. They can also rename as well as add a file or folder to a Favorite list for quick access and offline use from the Document Preview view. These additional controls now surfaced from the Folder List and Document Preview view further facilitate content collaboration for teams.

AWS Pros and Cons, Based on User Feedback

PROS: Amazon’s single biggest strength is that it was first to market in 2006 and faced no serious competition for more than two years. It sustains this leadership by continuing to invest heavily in its data centers and solutions. This is why it dominates the public cloud market. Gartner Research reported in its Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, that “AWS has been the market share leader in cloud IaaS for over 10 years.” Specifically, AWS has been the world leader for closer to 15 years, ever since it first launched its S3 (Simple Storage Service) in fall 2006.

Part of the reason for its popularity is certainly the massive scope of its global operations. AWS has a huge and growing array of available services, as well as the most comprehensive network of worldwide data centers. Gartner has described AWS as “the most mature, enterprise-ready (cloud services) provider, with the deepest capabilities for governing a large number of users and resources.”

CONS: Cost and data access are Amazon’s Achilles heels. AWS regularly lowers its prices; in fact, it has lowered them more than 80 times in the last several years, which probably means they were too high to begin with. Even so, many enterprises find it difficult to understand the company’s cost structure, and they have a hard time managing these costs effectively when running a high volume of workloads on the service. And customers, beware: Be sure you understand the costs of extracting data and files once they are in AWS’s storage control. AWS will explain it all upfront, but know that it’s a lot easier to start a process, upload files into the AWS cloud and access apps and services than it is to find the data and files you need and move them to another server or storage array.

In general, however, these cons are outweighed by Amazon’s strengths, because organizations of all sizes continue to use AWS for a wide variety of workloads.

Go here to see eWEEK’s listing of the Top Cloud Computing Companies.

Go here to read eWEEK’s Top Cloud Storage Companies list.

This article is an update of a previous eWEEK study by Chris Preimesberger from 2019.

IBM Boosts Its Hybrid Cloud with New Power Systems, Red Hat Features

Vendor discussions of cloud and hybrid cloud computing typically follow a “go where you know” trajectory. That is, server and silicon vendors pitch new and different cloud-focused hardware functions while software and services players focus on new applications and tools that make life easier for cloud-bound code writers, developers and data center staff. However, when it comes to hybrid cloud, IBM’s efforts are in an entirely different class.

IBM is the only systems vendor still developing its own silicon (Oracle might disagree, but its SPARC CPUs haven’t been updated since the arrival of M8 in 2017) and optimizing the resulting servers for hybrid clouds. Additionally, IBM has sizable portfolios of home-grown enterprise operating systems (AIX, IBM i and z/OS), middleware and business applications it can bring to bear for cloud-based services. Finally, the company’s decades-long support of Linux (the lingua franca of the cloud) resulted in strategic partnerships with major open-source vendors, as well as IBM’s 2019 acquisition of Red Hat, which has its own substantial cloud-enabling technologies and services.

What this all means for enterprise customers was made abundantly clear in the new Power Systems and Red Hat offerings IBM introduced this week. Let’s consider that announcement.

Removing the friction from cloud-enabled hardware

On the hardware side, IBM introduced two Power Systems offerings:

  • An updated IBM Power Systems Private Cloud solution, an on-premises private cloud that can be scaled from one core with 256GB of memory to vast systems supporting thousands of VMs. IBM has enhanced the new offering by increasing the maximum number of VMs per pool from 1,000 to 1,500 and the number of systems per pool from 32 to 48. The new solution is also capable of monitoring/displaying minutes of usage for specific Linux distributions, thus simplifying resource consumption tracking and management.
  • The new IBM Power Private Cloud Rack solution is a pre-configured on-premises system for supporting Red Hat OpenShift. Based on Linux, IBM AIX or IBM i, the solution is designed to help organizations operationalize their hybrid cloud environments. In addition, organizations can use the Private Cloud Rack as an IaaS environment to speed the development and operation of Kubernetes container-based cloud-native applications via the Red Hat OpenShift Container Platform. According to IBM, the Power Private Cloud Rack can deliver 49% lower cost per request compared to similarly equipped x86-based platforms.

IBM has also extended its Power Private Cloud with a Dynamic Capacity function, which enables customers using Power Systems Private Cloud solutions to unlock additional compute cores as needed and get cloud-like, consumption-based pricing. IBM is extending that ability to hybrid cloud environments with hybrid capacity credits, which can be purchased and used to unlock capacity in on-premises IBM POWER9-based servers and IBM Power Virtual Servers on IBM Cloud. The company is also working with ecosystem partners to extend dynamic capacity across multiple Linux distributions.

Finally, IBM announced that AIX 7.3 (which is planned for GA in Q4 2021) will feature new continuous computing, scalability, security and automation capabilities, including some designed specifically for hybrid cloud environments.

Enhancing Red Hat for IT modernization and cloud-native development

IBM has also expanded Red Hat capabilities on Power Systems solutions. They include:

  • Red Hat OpenShift on IBM Power Systems Virtual Server: The IBM Power Virtual Server is an enterprise Infrastructure-as-a-Service offering built around IBM POWER9 and offering access to over 200 IBM Cloud services. The Red Hat OpenShift container platform is now available on IBM Power Virtual Server, enabling clients to leverage OpenShift to deploy agile hybrid clouds. In addition, IBM Power Virtual Server clients can now run leading business applications like SAP HANA in an IBM POWER9-based cloud.
  • Red Hat Runtimes on IBM Power Systems: Red Hat Runtimes, a set of products, tools and components designed to develop and maintain cloud-native applications, is now supported on IBM Power Systems. As a result, developers creating cloud-native applications have access to leading open source frameworks and runtimes that offer a single development experience for hybrid applications spanning IBM Power Systems and other platforms.
  • New Red Hat Ansible Content Collections: Red Hat’s Ansible Automation Platform (which was made available on IBM Power Systems last year) provides an open-source platform for simplifying automation of common IT tasks. Adding to an already extensive set of Ansible modules for IBM Power Systems, IBM has created 22 new Ansible modules since the start of the year that automate common tasks, like patch management, security management, operating system and application deployment, continuous delivery, centralized backup and recovery, and virtualization management and provisioning. Currently, there are 102 Ansible modules that support IBM POWER available to the open-source community on GitHub. Many are available as production-ready, enterprise-hardened and certified Ansible Collections via the Red Hat Ansible Automation Platform.

Final analysis

Many tech vendors “go where they know” in terms of cloud computing, providing solutions designed to address narrowly focused solutions or highly specific use cases. In contrast, IBM knows where it’s going in relation to virtually any hybrid cloud destination. The company’s deep experience in and broad array of silicon, server, storage, networking, OS, middleware, software, developer and open source technologies means that it can assist cloud-bound customers with whatever goals they aim to achieve or challenges they encounter.

These new and improved Power Systems and Red Hat solutions are merely the latest examples of the company’s clear-eyed focus on hybrid cloud. We expect IBM to continue delivering powerful, useful hybrid cloud solutions for many years to come.

Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. He is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics. © 2020 Pund-IT, Inc. All rights reserved.

How CEO Swan Set Up New CEO Gelsinger for Future Success at Intel

When looking at Intel’s earnings, you’d almost think we weren’t in a pandemic. The company set record revenue, exceeding guidance by a whopping $2.6 billion and returned $19.8 billion to shareholders, who could likely use the financial help. [Editor’s note: See additional analysis at the end of this article from Patrick Moorhead.] But the handoff between Swan and Gelsinger is unlike most of its type, in which the incoming CEO has to save a failing company. That isn’t the case here; Swan has done a phenomenal job returning Intel to a company that is on solid financial ground, allowing Gelsinger to do his magic. 

And Pat isn’t standing pat. He is already recruiting ex-Intel superstars in a very unusual move that should be considered a best practice. I would argue this move supports Gelsinger being named Glassdoor’s CEO of the year.

The goal of a turnaround CEO

Turnaround CEOs are those brought in to fix a company so severely broken that it can’t rely on the succession plan to replace a poorly performing CEO. They tend to come in two classes: those who prep a company for sale, pretty much gutting it to make its financials look better; and those who fix the company’s structure so that it again can be successful long term. I worked for Louis Gerstner at IBM, one of the latter, and watched those in the former group destroy companies. Fortunately, Bob Swan was one of the excellent turnaround CEOs. In record time, he got Intel back into shape so that a more traditional strategic CEO like Gelsinger could take over and assure the firm’s long-term future. 

The turnaround process is critical because turnaround CEOs are very different from operational CEOs. Think of this in terms of auto racing, when you have a car that just isn’t working. You could get the best driver on the planet, and you’d still lose. Your first job is to fix the car so it works, then you can get a top driver to win the race. Swan, in this analogy, is the mechanic; Gelsinger is the driver. 

Even though he doesn’t start until next month, Gelsinger, who put in 30 years at Intel early in his career, has already started making waves by doing brilliant things. 

Winning the race

Anyone who has ever raced professionally knows that to win, you need a champion crew. But one of the problems Swan’s predecessor Brian Krzanich created was stripping the company of many of its most capable people. Now, typically (particularly at Intel), a new CEO wouldn’t look at former employees as potential resources. The common complaint is that they have been away too long, but the real reason is that they may upset others’ advancement, because the folks who were let go were more qualified than those who stayed. Equally common is that the employees who were let go represented a threat to some better-positioned senior manager who doesn’t want that threat to come back.

A new CEO needs to build a team loyal to him or her. However, if he builds it with new employees, the people below those employees may not be loyal, and there is an even greater learning curve for the resulting team. But if you bring back people who were poorly treated and take care of them, they’ll be loyal, the people below them will know and trust them, and they can hit the ground running because they know how to work at the firm. One of the significant impediments to a new employee is understanding how things uniquely work in the company, and an ex-employee doesn’t have that learning curve. 

Gelsinger has raised some eyebrows by breaking with tradition and going after ex-employees, but I think this should be a sustaining practice. Intel has one of the strongest alumni associations of any firm, and that asset has been underutilized for virtually the entire history of the company. Pat’s moves here suggest that may change, and that change could further ensure Intel’s future. 

Wrapping Up

Unlike what we just saw in Washington, where President Biden was handed a country badly broken by his predecessor, Bob Swan has done Pat Gelsinger a solid favor and given him a version of Intel vastly better than the one Swan inherited. Gelsinger is already thinking outside the Intel box to ensure a positive outcome for his tenure and showcase that he is indeed the perfect CEO to take Intel into the future.

That is excellent news for Intel’s stockholders, employees and customers, and it showcases how things should be done. 

Rob Enderle is a principal at Enderle Group. He is a nationally recognized analyst and a longtime contributor to eWEEK and Pund-IT. Enderle is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics.

—————————————————

Editor’s additional sidebar: President and Principal Analyst Patrick Moorhead of Moor Insights & Strategy adds some perspective into Intel’s quarterly earnings report here: 

“Even in the midst of Intel’s 7nm manufacturing challenges, the company pulled off a phenomenal Q4, significantly exceeding expectations by $20B in revenue and $1.52 EPS. Tiger Lake demand looks strong on the PC side and I think, based on its 33% growth, likely gained market share. While the data center business did better than expected, it was weighed down by the cloud ingestion cycle, competition and continued decline in enterprise and government purchases. Mobileye was a huge standout, driving 39% quarterly revenue growth and 93% improvement in profitability. Mobileye is on its way to be an over-billion dollar annualized business, a real accomplishment. The company is forecasting a Q1 revenue decline, but keep in mind, that does not include memory business it is spinning off to Hynix. We’ll have to see if the company says anything about 7nm and outside foundry use, but it wasn’t mentioned in the release.”

The post How CEO Swan Set Up New CEO Gelsinger for Future Success at Intel appeared first on eWEEK.

New President Will Need to Scrutinize U.S.-China Relations for IT https://www.eweek.com/it-management/new-president-will-need-to-scrutinize-u-s-china-relations-for-it/ https://www.eweek.com/it-management/new-president-will-need-to-scrutinize-u-s-china-relations-for-it/#respond Wed, 20 Jan 2021 10:56:07 +0000 https://www.eweek.com/uncategorized/new-president-will-need-to-scrutinize-u-s-china-relations-for-it/

The post New President Will Need to Scrutinize U.S.-China Relations for IT appeared first on eWEEK.

The algorithm Donald Trump applied to the policies of his predecessor, Barack Obama, was simple: anything Obama had done, Trump rescinded, canceled or did the opposite. Joe Biden might be tempted to play turnabout with Trump, and in many cases that would make a lot of sense. In matters relating to areas such as the Paris Agreement on climate change, policies with respect to immigrants and adherence to the Foreign Emoluments Clause of the Constitution, Biden would be perfectly correct in reversing course.

But, from an industrial policy perspective, it would be worth examining closely how to deal with China. China was one of the large national rivals that took advantage during the past four years of the Trump administration’s inability to keep its eye on the ball. Russia found our blind spots like an expert squash player dropping a shot where his opponent isn’t, using Trump’s (almost) inexplicable softness toward Vladimir Putin to promote its agenda. While we were watching for election disruption, Russian hackers popped open SolarWinds like a can of Mountain Dew and entered the computer networks of thousands of organizations. Iran and North Korea edged merrily toward greater nuclear capabilities. But China, among them all, boldly and directly, went about the business of displacing the United States in as many domains as possible, maneuvering itself into position to become the next great industrial and military superpower.

China: Long an IT factory for the U.S.

Nowhere was this propensity more in evidence than in the domain of technology, where China has long been a factory for the United States and others, making what we design. That relationship was deeply interwoven when Trump took office, and he and his trade representatives spent a lot of time and energy tearing it apart. Luckily, individuals such as Apple CEO Tim Cook knew how to hit just the right soothing notes when speaking to Trump and managed to talk him down from some of the most potentially damaging moves. But in general, trade policy devolved into an escalating tariff war, whose main consequence was to slow the velocity of the technology industry, particularly the sectors involved in hardware manufacturing.

It might be tempting to go back to the good old days and just tie back together those frayed trade links. The industries in both countries benefited handsomely from the arrangement and likely would again.

And yet, something was never quite right about the China relationship, even in the supposedly good old days. It was rather one-sided. The Chinese government often required U.S. companies to give their Chinese business partners a majority stake in joint ventures and sometimes also stipulated technology transfer. When IBM created the OpenPower Foundation in 2013, a raft of Chinese companies signed up as members. IBM was no longer able to support its own silicon development and so essentially gave away powerful processor technology to anyone capable of running with the ball. Since then, Power technology has become a core component of the Made in China 2025 plan.

Although recent reports indicate that Chinese firms are experiencing setbacks in their pursuit of technological independence, the United States can take only cold comfort there. China is far more unified in its quest than any Western nation. Its government, industry and financial sector are all working together to learn from these impediments and move on.

Coming 5G deployments make the stakes even higher

So, should we cooperate? Shun? Compete? Some combination? There’s quite a lot of devilry in the details, and the stakes are particularly high on the eve of widespread deployment of 5G wireless communications technology. The United States has a leadership position in 5G. But so does China. U.S.-based Qualcomm is clearly the king of 5G handsets, supplying everyone from Apple to Samsung. But China-based Huawei is the leader in 5G base stations and not just in China. Germany is investing heavily in Huawei equipment, and other nations want in as well.

Striking the right balance will be a bit of a trick.

How do we make sure we’re protecting our intellectual property without choking off our markets? We don’t want to throw out the grain with the chaff. We don’t want to make it hard to sell chips to China on the eve of the next big telecommunications upgrade. Huawei wants to buy 5G products from Qualcomm, which sold the Chinese company a big pile of 4G products and has a long relationship there. If Huawei becomes a total pariah in the eyes of the U.S. government, it’s U.S. companies that will be left out in the cold. Huawei will simply buy 5G products from Samsung.

But this story is much bigger than Huawei. It’s about a clash between two governments and their differing approaches to industrial policy. From our perspective, the Chinese government has not played by the rules. But viewed from the perspective of international rivalry, Huawei belongs to the same class of “frenemy” as Samsung.

Meanwhile, economically, China is on a roll. Everyone except China is reeling from COVID-19. Having used its centrally controlled society to effect a hard shutdown early, China is nearly back to normal. This is not a market for the U.S. to let get away.

Chip value still resides in the U.S.

So, where should we end up in all this?

I would say that we’re actually in a pretty good position if we play our cards right. In the chip industry, most of the value is still created in the United States. Most U.S. firms have moved to a “fabless” model, wherein they do the design work and leave the manufacturing to someone else, often Taiwan Semiconductor Manufacturing Company (TSMC). Even TSMC recently agreed to build a leading-node factory here in the United States. It’s the tens of thousands of U.S.-based chip designers that set us apart from other nations, with something like 80% of the value in a chip being created right here.

The technology revolution led by 5G will effect a cascade of changes throughout society, from the reinvention of health care as we emerge from the pandemic to bringing back jobs for small and midsize businesses. It’s technology that will enable all those new jobs and business opportunities. 5G will enable the economy to keep going during the pandemic.

We do have to think carefully about how the supply chain operates. While we don’t want to be over-invested in end-stage production, there is a national security component to letting others make our chips. We don’t want to be beholden to Asia, given the current trade war, pandemic and the unknowable outcome of the conflict between Taiwan and China. From that perspective, it would be great to bring some of our chip manufacturing home. At this time, Intel is the only semiconductor firm still making large quantities of chips on U.S. soil.

Innovative companies need to be protected

From a policy perspective, the new administration needs to think in terms of bringing back the knowledge economy as everything moves to digitization and connection. The government needs to nurture not just 5G, but a wide range of technologies, in which we can be the innovators, building new platforms and ecosystems that allow us to grow despite the competition.

We’re not going to win building cheap chip businesses against low-labor-cost Asian industries that accept lower margins. We need to cultivate our leadership in the innovation economy, paving a path for workers in the millennial generation and beyond.

We have the advantage here, but we need to maintain it by protecting innovative companies while remaining vigilant against foreign competitors that often have explicit backing from their own governments.

Roger Kay is affiliated with PUND-IT Inc. and a longtime independent IT analyst.

Top Data Center Managed Services Vendors for 2022 https://www.eweek.com/cloud/top-data-center-managed-services-vendors-for-2021/ https://www.eweek.com/cloud/top-data-center-managed-services-vendors-for-2021/#respond Tue, 19 Jan 2021 23:55:00 +0000 https://www.eweek.com/uncategorized/top-data-center-managed-services-vendors-for-2021/

The post Top Data Center Managed Services Vendors for 2022 appeared first on eWEEK.

As the growing COVID-19 pandemic has pushed more and more companies into cloud services and remote collaboration applications they hadn’t planned to encounter quite this soon, the enterprise managed services market has had to react in a big way. The sector, already splintered in the last several years as the IT business itself has grown and developed into more specific products and services, has been pounded with new business requests as a result of the sea change in work-from-home and remote-employee use cases.

Analysts have estimated that about 50 percent of formerly “officed” employees started working from home in 2020 and that after the pandemic subsides, about the same number will continue to work from someplace other than their original offices. This all weighs heavily on both cloud and on-premises IT services, and somebody with a data center has to provide them.

The separate sub-categories of cloud managed services include:

  • Managed Security Services
  • Managed Network Services
  • Managed Communication and Collaboration Services
  • Managed IT Infrastructure
  • Managed Data Center Services
  • Managed Mobility Services
  • Managed Information Services

A managed service provider of any type is a vendor that provides information technology services on a 24/7 contract basis.

  • Cloud-based managed services provisioning is a relatively new business model (last dozen years or so) which takes an enterprise-wide, strategic approach to IT management and monitoring.
  • Data center managed services bring together several stakeholders (service providers, system integrators, technology partners, consulting firms, research organizations, resellers and distributors, enterprise users and technology providers) to offer a long-range, strategic approach to IT application management and monitoring.

Here are eWEEK’s Top Managed Service Vendors for data centers in 2022.

IBM

Armonk, N.Y.

IBM is, as usual, consistent and, well, big. Big Blue was one of the original IT data center services providers and has maintained its worldwide lead in managed services revenue for more than a decade. It continues to provide the widest range of managed services offerings on the market, selling about $7.65 billion worth of them in 2019, according to company documents. IBM also provides a skills and training program through which IT professionals can augment their existing skills and learn new ones.

Naturally, IBM recommends using its own hardware and software to implement this, although it does employ an open-standards approach that will take into account existing hardware and software investments by its customers.

Key values/differentiators:

  • Considered a one-stop shop for enterprises in adding managed services to their IT systems.
  • IBM claims its managed services help increase business agility due to a consumption-based approach—you only pay for what you use.
  • With built-in security, including alternate-site disaster recovery for the most critical workloads, users can safeguard data and applications.
  • IBM says its system enables the scalability needed to avoid downtime and performance problems, run in a security-rich environment, and minimize infrastructure cost and complexity.

Who uses it: Midrange to large enterprises
How it works: subscription cloud services, physical on-prem devices and services

Accenture

Dublin, Ireland

Accenture is one of the most respected IT integrators and consultants in the world and has an excellent reputation for speed and quality. This global management consulting firm offers a range of services and solutions in strategy, consulting, technology and operations. Accenture and IBM are the two largest and most well-known companies on this list when it comes to management consultancy. Accenture’s goal is to collaborate with clients to help them become high-performance businesses and governments.

Key values/differentiators:

  • Accenture is global, with approximately 210,000 people serving clients in more than 120 countries. The sheer size and reach of the global company is a huge selling point for many potential multinational clients.
  • Accenture has been developing its own software for years and recently deployed new artificial intelligence testing services. The only thing it doesn’t do much of is hardware, but it partners with just about everybody to acquire whatever is needed for a data center implementation.
  • Accenture combines years of experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies.
  • Pros: Known for stabilization, industrialized processes, minimal major outages and flexibility on changes in architecture
  • Cons: The current service desk needs to mature to modern levels, and resource quality/attrition in offshore centers can be an issue.

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical on-prem devices and services

Infosys

Bangalore, India

While “outsourcing” may be considered a dirty word in some U.S. circles, Infosys doesn’t shy away from it. Infosys is a longtime international leader in consulting, technology outsourcing and next-generation services and is proud of it. Infosys says it provides enterprises with strategic insights on what lies ahead. The company enables clients in more than 50 countries; it claims its mission is to help enterprises renew themselves while also creating new avenues to generate value. Infosys claims to help enterprises transform in a changing world through strategic consulting, operational leadership and the co-creation of breakthrough solutions, including those in mobility, sustainability, big data and cloud computing.

Infosys is excellent at retaining its customers. More than 95 percent of its $45 billion in annual revenue comes from repeat business. Infosys has a growing global presence with more than 187,000 employees.

Key values/differentiators:

  • Infosys has expertise in virtually all sectors of information technology, which is a key requirement in data center migration projects.
  • Infosys is flexible in working with customers to scale their resources up or down and can assist in providing the best financial options for each project. This can include deferred payments, ensuring that invoices meet the customer’s requirements and negotiating on rates.
  • Having a large resource pool allows Infosys to quickly provide personnel with experience in all areas of the IT business.
  • Ample resources with the ability to escalate as required, 24/7 monitoring and very good reporting.

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Fujitsu

Tokyo, Japan

Fujitsu has been among the top 10 data center managed services providers for most of the last two decades. It provides capabilities in various IT domains, such as IoT, edge computing, process automation, mobility and others. Fujitsu claims to provide stable service and strives to be a long-term business partner. Fujitsu is one of the leading Japanese information and communication technology (ICT) companies, offering a full range of technology products, solutions and services. Approximately 140,000 Fujitsu people support customers in more than 100 countries. The company uses its experience and the power of ICT “to shape the future of society” with its customers.

Key values/differentiators:

  • Clients report that Fujitsu generally is very client-focused and provides a high-quality service of great value. Staff are a pleasure to work with and bring knowledge and expertise. Most implementations have been a success, users report.
  • Generally known as a stable, cooperative and innovative company with which to work
  • Customers have reported that the delivery team is very strong, thinking with the customer and working toward achieving true customer satisfaction

To take under advisement:

  • Contract negotiation in sole sourcing mode can be complex and rigid
  • Project pricing, lead times, complexity of organization can be problematic

Who uses it: Midrange companies to large enterprises
How it works: subscription cloud services, physical devices and services

Atos

Bezons, Ile-de-France, France

Atos is one of the youngest companies on this list and might be the most forward-thinking as well. That, of course, is a natural advantage for a newer company, which can study the competition and find its own solution improvements. Atos has moved into quantum computing with the launch of an Intel-based emulator; the idea is to use the emulator to train its coders in the skills that will be needed when actual quantum computers are used for many tasks. Although it may be among the less-famous tech companies, it earned total revenues of $20 billion last year, a figure that compares favorably with others on this list.

Key values/differentiators:

  • Atos is a strategic partner in every sense of the word, adapting to its clients’ business changes. It is known to consistently meet or exceed service-level agreements.
  • The Atos platform is recognized as being among the global industry-leading offerings, with strong service management and deep provider expertise. Atos is good at improving compliance and risk management as well as cost optimization.
  • Atos is good at creating internal/operational efficiencies
  • Top-flight functional capabilities
  • Excellent industry expertise

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Rackspace

San Antonio, Texas

Founded in 1998, Rackspace has been there since the beginning of data center cloud services—in fact, since the ASP (application service provider) days. The company provides hybrid cloud-based services that enable businesses to run their workloads in a public or private cloud. Rackspace’s engineers deliver specialized expertise on top of leading technologies developed by OpenStack, Microsoft, VMware and others through a service known as Fanatical Support. It has more than 300,000 customers worldwide, including two-thirds of the Fortune 100.

Key values/differentiators:

  • Rackspace has been named a leader in the Gartner Magic Quadrant for Cloud-Enabled Managed Hosting
  • Rackspace gets high marks from users for the following:
  1. Onboarding and transition from incumbent provider for DevOps, Infrastructure and Ops
  2. Educating customers’ internal teams on Rackspace’s capabilities and support processes
  3. Internal collaboration between FAWS support and Rackspace Managed Security, particularly for contracts/terms/cost.
  4. Incident response from the Rackspace tech team (i.e., Tech Acct Mgr, Tech Lead) has been good
  5. Team collaboration around infrastructure strategies for scaling (up/down) and executing on the plan.

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Cognizant

Teaneck, N.J.

Cognizant turned 26 in 2020 and remains one of the world’s leading data center professional services providers, although it’s not as well known as Accenture, IBM and others. It specializes in transforming clients’ business, operating and technology models for the digital era. Its industry-based, consultative approach helps clients envision, build and run more innovative and efficient data centers. Cognizant is ranked in the Fortune 200 and is consistently listed among the most admired companies in the world.

Key values/differentiators:

  • Cognizant’s home-built software is stable and modern. Data centers are top-notch operations in geographically redundant locations.
  • Cognizant is a highly flexible and collaborative partner; account executives are very transparent and work to do “the right thing.”
  • Early on with Cognizant, processes were lacking, but it has since improved; they were minor issues but were handled with urgency

To take under advisement:

  • Project management is lacking at times, due to lower investment in this area than should be provided for steady-state operations with an SOW in place.

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Tata Consultancy Services

Mumbai, India

Tata Consultancy Services is a multinational information technology services, business solutions and consulting company that competes on a global level with all the major data center consultancies. TCS offers an integrated consulting-led portfolio of IT-enabled services comprising application development and maintenance, business intelligence, enterprise solutions, engineering and industrial services and infrastructure services, delivered through its own Global Network Delivery Model. Based in Mumbai, India, it was founded in 1968.

The company has domain expertise in a wide set of industries, comprising banking and financial services, insurance, telecom, manufacturing, retail and distribution, high tech, life sciences, health care, transportation, energy and utilities, media and entertainment and others.

TCS operates on five continents, with North America and Europe constituting the largest markets for its services. It derives more than 20 percent of its revenues from emerging markets such as India, Asia-Pacific, Latin America and the Middle East and Africa.

Key values/differentiators:

  • Capable of handling virtually any data center building, integration or modernization project anywhere on Earth
  • Known for flexibility in contracting and servicing
  • Utilizes a very smooth operations model
  • Company-wide focus on joint effort to get projects done in team style is excellent
  • Adding expertise in AI and ML, so services will be improved in the future

To take under advisement:

  • More standard technology services and working methods needed

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Wipro

Bengaluru, India

Wipro, the oldest of the data center service providers on this list, was founded in 1945, right after World War II ended. It operates in a more diverse range of markets than many others because it offers managed services, business automation and home automation—services that competitors do not necessarily offer. The India-based company generated revenue of about $9 billion in 2019. During the next three years, the company plans to go all-digital, with the CEO saying that 100 percent of the company’s resources are being allocated to the digital operations goal.

Key values/differentiators:

  • Wipro helps customers do business more effectively by drawing on its industry-wide experience, deep technology expertise, comprehensive portfolio of services and vertically aligned business model. Wipro has more than 50 dedicated emerging-technologies Centers of Excellence that enable customers to harness the latest technology for delivering business capability to its clients.
  • Wipro Ltd. has a workforce of more than 160,000 serving clients in 175+ cities across six continents. The company posts revenues of about $9 billion yearly.
  • Wipro is globally recognized for its innovative approach toward delivering business value and its commitment to sustainability. Wipro champions optimized utilization of natural resources, capital and talent.
  • Wipro Technologies was recently assessed at Level 5 for CMMI V 1.2 across offshore and onsite development centers.

Who uses it: Midrange to large enterprises
How it works: subscription cloud services, physical devices and services

Datapipe

Jersey City, N.J.

Datapipe is a smaller MSP that offers managed hosting services and data centers for cloud computing and IT companies. The company, founded in 1998, offers a single-provider solution for managing and securing mission-critical IT services, including cloud computing, infrastructure as a service, platform as a service, co-location and data centers. Datapipe delivers those services from the world’s most influential technical and financial markets, including New York metro, Silicon Valley, London, Hong Kong and Shanghai.

Key values/differentiators:

  • Datapipe provides services to a range of vertical industries, including financial services, health care and pharmaceutical, manufacturing and distribution, state and federal governments, publishing, media and communications, business services, public sector, technology and software
  • Proactive project consultation and quick project engagement are good factors
  • Datapipe was named to Gartner’s 2010 Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting.

Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services

Equinix

Redwood City, Calif.

Equinix provides data center services for companies, businesses and organizations. It offers a software application platform designed for digital businesses that helps its users connect to their customers, employees and partners. The company was founded in 1998. Equinix describes itself as “the world’s digital infrastructure company.” Digital leaders deploy its platform to bring together and interconnect the foundational infrastructure that powers their success.

Key values/differentiators:

  • Equinix is rapidly becoming one of the world’s go-to leaders in the data center interconnect space, which is growing steadily.
  • Equinix enables its customers to access all the correct places, partners and possibilities they need to accelerate business advantage in technology.
  • In the United States, Equinix operates data centers in Atlanta, Boston, Chicago, Dallas, Denver, Los Angeles, Miami, New York, Philadelphia, Seattle, Silicon Valley and Washington, D.C.

Who uses it: SMBs to large enterprises
How it works: subscription cloud service, physical devices and services

——————————————————————-

Honorable mentions: AT&T, Cisco Systems, HCL, Capgemini

Top CASB Solutions 2022: Cloud Access Security Brokers https://www.eweek.com/security/top-cloud-access-security-vendors-for-2021/ https://www.eweek.com/security/top-cloud-access-security-vendors-for-2021/#respond Fri, 25 Dec 2020 01:29:00 +0000 https://www.eweek.com/uncategorized/top-cloud-access-security-vendors-for-2021/

The post Top CASB Solutions 2022: Cloud Access Security Brokers appeared first on eWEEK.

Cloud computing has not only altered the way the world does business, but it also has brought new and often challenging enterprise security requirements. As a result, a whole new sector of IT has sprung up in the last decade focused on cloud-access security–acronymed CASB (for cloud access security brokers)–which requires very different skills and tools from data center or campus-centric security.

Going into 2021, there are few areas of security more important to businesses, government, the military, consumers or the scientific sector than CASB. This is because all of us now do the majority of our work in cloud-based applications.

As enterprises adopt new services, applications and methods to manage data, the need to address changing data models and threat risks is essential. Organizations must address an array of issues that revolve around collaborative web applications, data flow, network designs, cloud infrastructure and other key areas.

Although major cloud providers typically offer robust built-in protections—including strong authentication, encryption and malware detection—there are often gaps in protection that result when organizations rely on multiple cloud service providers, different network topologies and numerous applications. These risks often involve key areas such as web application firewalls (WAFs), secure web gateways (SWGs) and data loss prevention (DLP).

Cloud access security brokers (CASB) take aim at this issue. “They deliver differentiated, cloud-specific capabilities generally not available as features in other security controls,” a recent industry report from Gartner Research said. “CASB vendors understand that for cloud services the protection target is different: it’s still your data but processed and stored in systems that belong to someone else.” Consequently, CASBs store policy management information and governance details across multiple cloud services. This delivers granular visibility and stronger controls. Gartner predicts that by 2022, 60 percent of large enterprises will use a CASB to govern cloud services, up from 20 percent today.

Here’s a look at 10 of the top vendors in the cloud security space. These ratings were curated with data and reviews from Gartner Peer Insights, G2 Crowd and IT Central.

Top CASB Solutions

Cisco Systems Cloudlock

Security Package: Cisco Cloudlock

Value proposition for potential buyers: Since Cisco Systems acquired Cloudlock four years ago, it has worked hard to incorporate the company into its portfolio of cloud-based products. The CASB solution offers a number of powerful capabilities, including the ability to configure policies dynamically and aggregate users into specific groups, based on real-time actions and behavior.

Key values/differentiators:

  • Cloudlock can also constrain user behavior, thus providing a powerful form of adaptive access control. In addition, it provides powerful controls, based on OAuth, that can override permissions and block certain types of cloud attacks.
  • A strong API framework helps organizations extend controls to SaaS applications that do not include native support for these and other features.

To Take Under Advisement:

  • One of the drawbacks to the approach Cloudlock takes is that all of these features and controls depend on sanctioned applications that provide APIs. Cisco also offers no support for CSPMs. Users nonetheless rate the platform as easy to implement, powerful and highly scalable.

Who uses it: Medium to large enterprises
How it works: subscription cloud service and on-prem options


Palo Alto Networks

Security Package: Palo Alto Aperture

Value proposition for potential buyers: Palo Alto Networks acquired CirroSecure in 2015 and has since relaunched the solution to include more focused cloud security tools. The 2020 solution is heavily focused on discovery along with SaaS policy and security management. Aperture includes strong data classification and monitoring tools, DLP, user activity tracking, known and unknown malware protection and detailed risk and usage reporting.

Key values/differentiators:

  • Palo Alto Networks is considered a niche player in the CASB space. Users say that Aperture is an excellent product with strong functionality, though it lacks some desirable features.
  • Among its strengths is an ability to identify SaaS and non-SaaS web applications that can be used to exfiltrate data.
  • It also delivers comparisons to multiple industry baselines and it suggests configuration changes to improve compliance.
  • Users rate the company’s support high.

To Take Under Advisement:

  • Cautions include configuration complexity and a lack of functionality in a few key areas, including reverse-proxy inspections.

Who uses it: Midrange and large enterprises
How it works: subscription cloud service and on-premises servers


CipherCloud

Security Package: CipherCloud CASB+

Value proposition for potential buyers: CipherCloud is one of the more respected young cloud security companies on the scene. Encryption and tokenization are key elements of cloud security. CipherCloud, which has offered a CASB solution since 2011, places a heavy emphasis on data protection through cloud-native security and compliance across SaaS, PaaS and IaaS platforms. The solution offers robust cloud-based visibility and controls—extending to applications running in the cloud—and it can manage both structured and unstructured data.

Key values/differentiators:

  • One of the biggest strengths of the solution is an ability to encrypt data before delivering it to SaaS applications—while preserving partial application functionality.
  • The solution manages keys for SaaS-native encryption mechanisms either within CipherCloud itself or in a KMIP-compliant key management server.
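To illustrate the general idea of protecting data before it reaches a SaaS provider, here is a minimal, hypothetical tokenization sketch in Python. It is not CipherCloud’s implementation; the vault class, field names and token format are invented for illustration:

```python
import secrets

# Hypothetical tokenization vault: sensitive field values are swapped for
# random tokens before a record leaves for the SaaS app; the token-to-value
# mapping never leaves the organization's control.
class TokenVault:
    def __init__(self):
        self._vault = {}              # token -> original value, kept locally

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
record = {"customer": "Acme Corp", "ssn": "123-45-6789"}

# Only the sensitive field is replaced; the SaaS provider stores the token.
outbound = dict(record, ssn=vault.tokenize(record["ssn"]))

assert outbound["ssn"].startswith("tok_")
assert vault.detokenize(outbound["ssn"]) == "123-45-6789"
```

Products in this category go further, using format- and function-preserving schemes so the application keeps partial functionality (search, sort) against protected values, which simple random tokens do not allow.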

To Take Under Advisement:

  • Potential weaknesses include adaptive access controls and continuous risk assessment tools that trail competitors, Gartner noted; it positioned the company on the border between visionary and leader in its Magic Quadrant.
  • Some adopters find the product somewhat difficult to use and a bit pricey, though overall ratings are extremely high.

Who uses it: small to large enterprises
How it works: subscription cloud service


Microsoft

Security Package: Microsoft Cloud App Security (MCAS)

Value proposition for potential buyers: Microsoft’s acquisition of Adallom in 2015 broadened the company’s security offerings in a big way. MCAS offers a reverse-proxy-plus-API CASB that can operate independently or as part of Microsoft’s Enterprise Mobility + Security (EMS) suite, which includes tools for Azure and other applications and components. The solution also includes threat protections and sophisticated analytics.

Key values/differentiators:

  • Gartner positions the company in the “challenger” quadrant, while users say that while it can be a bit tricky to implement, it delivers powerful features and strong protections.
  • Gartner describes the interface as “intuitive” and says that the solution handles complex policies using a visual editor. This makes the process simpler by eliminating scripting and programming.
  • It also offers suggestions and hints that can guide an organization to more robust cloud security.
  • Finally, it delivers strong automation, particularly around watermarking and encryption.

Who uses it: Individuals, SMBs and midrange enterprises
How it works: subscription cloud service


Forcepoint

Security Package: Forcepoint CASB

Value proposition for potential buyers: Identifying shadow IT, preventing compromised accounts and ensuring secure mobile access to cloud apps covers a broad expanse of enterprise security requirements. Clouds ratchet up the challenges exponentially. Forcepoint CASB focuses on these issues.

Key values/differentiators:

  • It delivers a broad package of security products that revolve around secure web gateways, email security, user and entity behavior analytics, DLP and data security, and network firewalling.
  • The solution delivers a powerful engine that meshes with workflows and enterprise policies. It also offers risk scoring, anomaly detection, strong analytics and metrics tools, real-time oversight and powerful application governance.
  • The focus is heavily tilted toward business applications.

To Take Under Advisement:

  • One of the key cautions for adopting the platform revolves around an inability to configure control policies for preferred SaaS applications. Users describe the solution as powerful, granular and highly flexible. Gartner rates it in the middle of its quadrant.

Who uses it: SMBs to midrange enterprises
How it works: subscription cloud service and on-prem options


McAfee

Security Package: McAfee MVISION Cloud

Value proposition for potential buyers: McAfee is one of the best-known and most widely used security brands in the world, spanning categories that include both B2B and consumer markets. The company, owned for a few years by Intel but now independent again, acquired Skyhigh Networks in January 2018. The acquisition bolstered the company’s existing portfolio of DLP, SWG and network sandboxing technologies.

Key values/differentiators:

  • McAfee’s strengths lie in its powerful dashboard, high level of configurability and flexibility, real-time capabilities, and strong DLP controls.
  • Gartner notes: “McAfee offers extensive CSPM capabilities that exceed those of even some pure CSPM vendors. It includes strong auditing and compliance scanning plus multiple options for automatic and guided manual remediation.”
  • Users give the solution high marks and say it provides strong controls, particularly in finding shadow IT.

To Take Under Advisement:

  • Potential drawbacks include an inability to configure error messages for specific users and gaps in certain types of notifications, particularly involving real-time APIs. Gartner ranks the solution among the leaders.

Who uses it: SMBs, midrange, large enterprises
How it works: subscription cloud service


Bitglass

Security Package: Bitglass Next-Gen CASB

Value proposition for potential buyers: Bitglass runs natively from the cloud, but it can also be deployed on-premises as a Docker container. The vendor has emerged as a leader in the CASB space by introducing a zero-day approach heavily tilted toward trust ratings, trust levels and at-rest encryption that’s tightly integrated with enterprise compliance and governance requirements.

Key values/differentiators:

  • The platform, which extends to mobile security and shadow IT controls, is powered by an agentless “AJAX Virtual Machine (VM)” abstraction layer transparently embedded within a user’s browser to support real-time data protection in specific scenarios, including unmanaged devices.
  • Bitglass CASB features an automated learning mode, digital watermarks, and strong data loss prevention.
  • On the downside, Gartner points out that the solution isn’t able to modify SaaS applications’ native security controls and is limited in its ability to assign and consume Azure Information Protection templates. Overall, Gartner rated Bitglass a leader in its 2018 Magic Quadrant ratings. Users say that the solution is intuitive and offers powerful capabilities.

Who uses it: midrange to large enterprises
How it works: subscription cloud service with container option


Netskope

Security Package: Netskope Security Cloud

Value proposition for potential buyers: Netskope remains an independent company in a space where major software and networking companies are scooping up CASB solution providers. The company has been shipping products since late 2013 and focuses heavily on application discovery and SaaS security posture assessments.

Key values/differentiators:

  • Among its strengths are strong analytics tools, including behavioral analytics, and a robust alert system. This, among other things, helps Netskope spot vulnerabilities in APIs, mobile devices and shadow IT.
  • Gartner labeled the company a leader in its 2018 Magic Quadrant.
  • Users report that the solution offers strong visibility, powerful DLP features and excellent threat intelligence feeds.

To Take Under Advisement:

  • Complaints revolve around difficulties configuring agents and a limited ability to use APIs for remediation. Many CASB vendors now incorporate APIs for posture assessment as well.

Who uses it: SMBs, midrange, large enterprises
How it works: subscription cloud service


Oracle

Security Package: Oracle Cloud Access Security Broker (CASB) Cloud Service

Value proposition for potential buyers: Oracle has moved beyond a one-solution-fits-all approach to CASB. Its solution, originally Palerra, offers discovery and deep visibility into SaaS applications using a log-based approach that revolves around cloud activity. This helps the solution identify risky applications installed through Oracle, Salesforce and other platforms. The result is strong security monitoring, threat protection and incident response. Organizations can also license Inline DLP (for real-time detection) and API DLP (for retroactive scanning).

Key values/differentiators:

  • One of Oracle CASB’s strengths is a high level of flexibility, including the ability to expand detection to new content easily. In addition, custom applications running in the Java Virtual Machine (JVM) require no further action. They are automatically protected.
  • Finally, Oracle CASB monitors for misconfigurations and notifies users when a problem may be present—and when the organization doesn’t match industry benchmarks.
  • Oracle landed as a challenger on its way to becoming a leader in Gartner’s MQ. Users praise the platform for easy integration and strong protection capabilities but say it can prove difficult to fully integrate across a portfolio of cloud solutions.
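Benchmark-comparison monitoring of the kind described above can be sketched generically. This is a hypothetical illustration, not Oracle’s API; the setting names and baseline values are invented:

```python
# Hypothetical benchmark-comparison sketch: flag settings that drift from an
# industry baseline. The setting names and values are invented for illustration.
BENCHMARK = {"mfa_required": True, "public_buckets": False, "min_tls": "1.2"}

def find_misconfigurations(config: dict) -> list:
    """Return the names of settings that do not match the benchmark."""
    return [name for name, expected in BENCHMARK.items()
            if config.get(name) != expected]

observed = {"mfa_required": True, "public_buckets": True, "min_tls": "1.0"}
print(find_misconfigurations(observed))  # ['public_buckets', 'min_tls']
```

A real product layers notification, remediation workflow and continuous re-scanning on top of this kind of comparison.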

Who uses it: large enterprises
How it works: subscription cloud service and on-premises servers


Symantec

Security Package: Symantec Cloud Data Protection

Value proposition for potential buyers: Strong cloud security requires an array of features. Symantec delivers strong capabilities through its Cloud Data Protection platform, which incorporates products formerly offered by Blue Coat. The focus is on tokenizing or encrypting data stored in SaaS applications. The platform achieves a high level of protection through log analysis and traffic inspection. It provides cloud security assessment ratings by plugging in user behavior analytics, cloud usage patterns, malware analysis and cloud application discovery.

Key values/differentiators:

  • Strengths include: strong reporting capabilities, alerts for policy violations, highly adaptive access controls and a wide range of predefined DLP selectors.
  • Symantec is among the leaders in the Gartner MQ. Users say the platform delivers strong and mature capabilities.

Who uses it: Small, midrange and large enterprises
How it works: subscription cloud service

The post Top CASB Solutions 2022: Cloud Access Security Brokers appeared first on eWEEK.

]]>
https://www.eweek.com/security/top-cloud-access-security-vendors-for-2021/feed/ 0
Lenovo Shows New Data Management Solutions for Hybrid Cloud, AI https://www.eweek.com/pc-hardware/lenovo-shows-new-data-management-solutions-for-hybrid-cloud-ai/ https://www.eweek.com/pc-hardware/lenovo-shows-new-data-management-solutions-for-hybrid-cloud-ai/#respond Thu, 10 Dec 2020 11:11:00 +0000 https://www.eweek.com/uncategorized/lenovo-shows-new-data-management-solutions-for-hybrid-cloud-ai/ Computing is and always has been about data, the “information” in IT. That is especially true in business organizations, where the earliest computing solutions focused on speeding and simplifying financial transactions and similar processes. Evolving technologies enable companies to access, manage, gain insights and profit from new forms of data, often gathered or created in […]

The post Lenovo Shows New Data Management Solutions for Hybrid Cloud, AI appeared first on eWEEK.

]]>
Computing is and always has been about data, the “information” in IT. That is especially true in business organizations, where the earliest computing solutions focused on speeding and simplifying financial transactions and similar processes. Evolving technologies enable companies to access, manage, gain insights and profit from new forms of data, often gathered or created in remote locales. But doing so requires IT vendors to develop more robust and sophisticated tools for managing and analyzing that information.

These points underscore the value of the new storage systems, monitoring tools and management capabilities that Lenovo’s Data Center Group (DCG) recently announced. Working alone and with strategic partners, Lenovo has considerably expanded its business customers’ options for working with hybrid cloud, analytics and artificial intelligence (AI). Let’s look at that more closely.

Data unification from edge to core to cloud

Business IT solutions have long supported location-specific use cases, including remote or branch offices (ROBOs) and public cloud platform services. However, the continuing growth in both the volume and variety of information that companies work with tends to strain traditional IT offerings.

That is the case for the majority of organizations that choose to implement hybrid IT environments that access multiple cloud platforms. It is also true for newer, still-evolving use cases like edge computing, which is expected to grow significantly with the introduction of robust 5G wireless technologies and networks.

Cohesively accessing and managing far-flung data assets is challenging for most enterprises but is especially problematic for small to medium-sized enterprises (SMEs). Those same organizations also face notable challenges when it comes to effectively analyzing ever-expanding information resources.

Lenovo DCG’s new solutions

These are some of the issues that Lenovo has addressed with its new data management offerings. They include:

  • Lenovo ThinkSystem DM5100F, a new member of the company’s DM Series family, is an affordable all-NVMe storage system that offers high performance and low latency. DM Series systems now include S3 Object support, allowing customers to manage and analyze all data types (block, file and object) within a single storage platform and perform cost-effective data analytics across data resources. The DM5100F also supports enhanced data protection capabilities, such as transparent failover and native object storage management. Additionally, Lenovo DM Series customers can add features like cold-data tiering from hard drives to the cloud and data replication to the cloud, enabling a multi-cloud storage strategy while reducing data management costs.
  • Jolera, a multinational service provider, offered a testimonial describing how it has boosted storage deduplication ratios from 3:1 to 4:1 with built-in tools included in Lenovo’s DM Series and DE Series storage solutions.
  • Lenovo DB720S Fibre Channel Switch is a new generation offering for 32Gbps and 64Gbps storage networking, delivering higher speed and 50 percent lower latency than previous solutions. The DB720S also supports autonomous SAN infrastructures with self-learning, self-optimizing and self-healing capabilities, reducing downtime and simplifying storage network management.
  • Lenovo ThinkSystem Intelligent Monitoring 2.0 software solution is a cloud-based management platform that simplifies and automates Lenovo ThinkSystem storage environments with AI-based tools and processes. The solution offers a single cloud-based interface to monitor and manage capacity and performance for multiple locations, predict issues before they happen and offer storage administrators prescriptive guidance.
  • Reference Architecture for an AI training system is the result of a collaboration between Lenovo, NVIDIA and NetApp. The resulting entry-level solution is designed for small and medium-sized IT teams whose compute jobs are mostly single-node (single- or multi-GPU) or distributed over a few computational nodes.
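The deduplication improvement cited in Jolera’s testimonial translates directly into physical capacity savings, since the ratio expresses logical data stored per unit of physical capacity. A quick back-of-the-envelope sketch (the 100 TB workload is a hypothetical figure):

```python
# Deduplication ratio = logical data stored / physical capacity consumed.
# A hypothetical 100 TB logical workload shows what moving from 3:1 to 4:1 frees.
def physical_tb(logical_tb: float, dedup_ratio: float) -> float:
    """Physical capacity needed to hold `logical_tb` at a given dedup ratio."""
    return logical_tb / dedup_ratio

logical = 100.0                      # TB (hypothetical workload size)
before = physical_tb(logical, 3.0)   # ~33.3 TB on disk at 3:1
after = physical_tb(logical, 4.0)    # 25.0 TB on disk at 4:1

print(f"capacity freed: {before - after:.1f} TB")  # capacity freed: 8.3 TB
```

In other words, the one-step ratio improvement cuts physical consumption by a quarter relative to the 3:1 baseline.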

Final analysis

A continuing truth about business computing is that while companies of every size and kind can benefit from ever more powerful and capacious IT solutions, some organizations lag or are simply left behind while others forge ahead. There are numerous reasons for these discrepancies, including lack of access to capital or experienced IT staff.

However, the best vendors are those who assist all sorts of organizations by developing solutions that address a wide variety of business workloads and use cases. That approach is clear in Lenovo’s new data management solutions, which are designed to cost-effectively enhance storage performance while optimally supporting the access and analysis of business information wherever it resides, including hybrid cloud environments. In addition, the company’s collaboration with NVIDIA and NetApp is designed to ensure that even small and medium-sized IT teams have access to powerful AI training tools and methodologies.

No business technology or IT vendor can guarantee that customers will succeed. However, Lenovo DCG’s new solutions underscore the company’s intention to provide its customers the computing solutions they and their businesses require.

Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK.  © 2020 Pund-IT, Inc. All rights reserved.

The post Lenovo Shows New Data Management Solutions for Hybrid Cloud, AI appeared first on eWEEK.

]]>
https://www.eweek.com/pc-hardware/lenovo-shows-new-data-management-solutions-for-hybrid-cloud-ai/feed/ 0
Lenovo DCG and DreamWorks: Tech Innovation Meets Real-World Experience https://www.eweek.com/pc-hardware/lenovo-dcg-and-dreamworks-tech-innovation-meets-real-world-experience/ https://www.eweek.com/pc-hardware/lenovo-dcg-and-dreamworks-tech-innovation-meets-real-world-experience/#respond Wed, 02 Dec 2020 23:31:00 +0000 https://www.eweek.com/uncategorized/lenovo-dcg-and-dreamworks-tech-innovation-meets-real-world-experience/ Cementing relationships with well-known companies is something most every IT vendor hopes to achieve. That is hardly surprising, since doing business with organizations that are household names suggests that vendors are doing something right. Often that assumption is entirely correct, since successful companies can pick whom they want to do business with and typically choose […]

The post Lenovo DCG and DreamWorks: Tech Innovation Meets Real-World Experience appeared first on eWEEK.

]]>
Cementing relationships with well-known companies is something almost every IT vendor hopes to achieve. That is hardly surprising, since doing business with organizations that are household names suggests that vendors are doing something right. Often that assumption is entirely correct, since successful companies can pick whom they want to do business with and typically choose vendors whose products best fit their needs.

Landing a client like this and keeping the relationship on track qualifies as a big deal, but so does being supplanted by a powerful rival. At Lenovo’s recent Tech World conference, the company announced that animation leader DreamWorks Animation had chosen Lenovo’s Data Center Group (DCG) to update its legacy data center, displacing HPE, the data center vendor with which DreamWorks had a longstanding strategic relationship.

Let’s consider what likely drove DreamWorks’ decision and why Lenovo is the right vendor for the job.

Links between Hollywood and Silicon Valley

The technology and entertainment industries have been linked at the hip for well over three decades, with computer-generated imagery (CGI) making a giant leap into mainstream films in 1991. That was the year that audiences flocked to James Cameron‘s Terminator 2: Judgment Day featuring the awesomely liquid metal T-1000 killer robot and Disney‘s Beauty and the Beast, the second traditional 2D animated film to be entirely made using CAPS (computer animation production system) technologies. Since then, CGI and CAPS have dominated mainstream films—the 50 highest-grossing movies of the past decade virtually all utilized or depended entirely on CGI or CAPS.

While a tiny handful of IT vendors (notably, Silicon Graphics/SGI) dominated early productions with proprietary solutions and tools, the shift toward industry-standard components and systems sparked fundamental changes. That was due in large part to synergies between graphics rendering processes and high-performance computing (HPC) technologies, the end result being the emergence of Intel-based systems vendors as major players and partners in CGI and CAPS production.

Those included the 2002 strategic alliance between HP (now HPE and HP Inc.) and DreamWorks, which followed the companies’ collaboration on DreamWorks’ Shrek and continued through other DreamWorks franchises, including Kung Fu Panda, How to Train Your Dragon, Trolls and The Croods. The alliance survived HP’s 2015 split into separate client/printing and data center companies. DreamWorks remains partnered with HP Inc., but when it began planning to upgrade its rendering data center, it turned to Lenovo DCG.

Why DreamWorks chose Lenovo

What led to the deal? While few details about the project’s size and scope are available, it’s reasonable to assume that DreamWorks was attracted to Lenovo’s deep experience in high-performance computing (HPC), the company’s innovative system designs and technologies and its global supply chain prowess.

As was noted in the story detailing the agreement, HPC is vitally important in digital content creation where “producing a computer-generated animated feature typically takes four years with hundreds of artists and engineers working in tandem to create half a billion digital files that require 200 million compute hours (22,000 compute years) to render.”
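The figures quoted above are easy to sanity-check: 200 million compute hours expressed in compute-years works out to roughly the 22,000 cited.

```python
# Convert the quoted rendering workload into compute-years.
compute_hours = 200_000_000
hours_per_year = 24 * 365          # 8,760 hours in a non-leap year

compute_years = compute_hours / hours_per_year
print(round(compute_years))        # 22831, i.e. roughly 22,000 compute-years
```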

Lenovo is deeply experienced in all phases of HPC, both at the highest levels of supercomputing-assisted research and in a broad range of commercial and industrial applications. The company has earned more places on the Top500.org list of best-performing supercomputers than any other vendor since pushing HPE out of the top spot in June 2018.

In other words, it is hard to think of a better partner to help develop and deploy a world-class HPC cluster. Additionally, the experience Lenovo gained working with high-end supercomputing customers, including the Leibniz Computing Center, Cineca and the Barcelona Supercomputing Center, has informed and inspired Lenovo’s commercial HPC solutions, including the ThinkSystem SR670, ThinkSystem SD530 and ThinkSystem SD650.

The ThinkSystem SD650 also features Lenovo Neptune, a notable liquid-cooling technology that the company says can deliver up to a 40% savings in data center energy expenses or help customers pack significantly more compute power into a smaller space. Those points were especially important to DreamWorks, which runs its data center at a high utilization rate (currently 98%) and wanted to avoid expanding the footprint of its rendering facility.

Finally, the complexities of the DreamWorks project, along with challenges caused by the Covid-19 pandemic, required high levels of design, development and deployment expertise. Lenovo worked with DreamWorks contractors to integrate the plumbing and cooling systems so the systems could quickly go live and start adding value. Lenovo’s logistics team leveraged the company’s global supply chain, pre-ordering components with long lead times, staging them in Europe so they would be available as needed, and working with global suppliers to ship the systems and synchronize their arrival.

According to Skottie Miller, a Technology Fellow at DreamWorks Animation: “It was a beautifully orchestrated logistical masterpiece. I was joking that I couldn’t buy a roll of toilet paper during the pandemic, but I could buy and install a supercomputer.”

Final analysis

IT vendors like to focus on the value of market-leading performance and new technological innovations. However, having the experience to understand a customer’s business needs and the flexibility to deliver and deploy new solutions as they are required are equally important. DreamWorks Animation’s effort to update its rendering data center is an example of how, with a partner such as Lenovo DCG, an organization can address all these issues and be ready to pursue ever-greater achievements.

Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK.  © 2020 Pund-IT, Inc. All rights reserved.

The post Lenovo DCG and DreamWorks: Tech Innovation Meets Real-World Experience appeared first on eWEEK.

]]>
https://www.eweek.com/pc-hardware/lenovo-dcg-and-dreamworks-tech-innovation-meets-real-world-experience/feed/ 0
How NVIDIA A100 Station Brings Data Center Heft to Workgroups https://www.eweek.com/big-data-and-analytics/how-nvidia-a100-station-brings-data-center-heft-to-workgroups/ https://www.eweek.com/big-data-and-analytics/how-nvidia-a100-station-brings-data-center-heft-to-workgroups/#respond Thu, 19 Nov 2020 03:55:00 +0000 https://www.eweek.com/uncategorized/how-nvidia-a100-station-brings-data-center-heft-to-workgroups/ There’s little debate that graphics processor unit manufacturer NVIDIA is the de facto standard when it comes to providing silicon to power machine learning (ML) and artificial intelligence (AI) based systems. As important as Intel was to general-purpose computing, NVIDIA is the same to accelerated computing. Its GPUs can be found in everything from big-data […]

The post How NVIDIA A100 Station Brings Data Center Heft to Workgroups appeared first on eWEEK.

]]>
There’s little debate that graphics processing unit (GPU) manufacturer NVIDIA is the de facto standard when it comes to providing silicon to power machine learning (ML)- and artificial intelligence (AI)-based systems. What Intel was to general-purpose computing, NVIDIA is to accelerated computing. Its GPUs can be found in everything from big data center systems to automobiles to desktop video devices, even consumer endpoints.

NVIDIA is best known for GPUs but also makes systems

An emerging part of NVIDIA’s business is its systems group, which makes full-functioning, turnkey servers and desktop PCs for accelerated computing. An example is the NVIDIA DGX server line, a set of engineered systems built specifically for the rigors of AI/ML. This week at the digital Supercomputing show, NVIDIA announced the latest member of the DGX family, the DGX A100 Station.

This “workstation” is a beast of a computer and features four of the recently announced A100 GPUs. These GPUs were designed for data centers and come with either 40 GB or 80 GB of GPU memory, giving the workstation up to 320 GB of GPU memory for data scientists to infer, learn and analyze with. The DGX A100 Station delivers a whopping 2.5 petaflops of AI performance and features NVIDIA’s NVLink as the high-performance backbone connecting the GPUs with minimal inter-chip latency, effectively creating one massive GPU.

MIG enables workgroups to leverage a single system

I put the term “workstation” in quotes because it’s really a workstation in form factor only; even at 2.5 petaflops compared to the 5 petaflops of the DGX A100 server, it’s still a beast of a machine. The benefit of the DGX Station is that it brings AI/ML out of the data center and allows workgroups to plug it in and run it anywhere. The workstation is the only workgroup system I’m aware of that supports NVIDIA’s Multi-Instance GPU (MIG) technology. With MIG, the GPUs in the A100 Station can be virtualized so a single workstation can provide 28 GPU instances to run parallel jobs and support multiple users, without impacting system performance.
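The 28-instance figure follows from MIG’s per-GPU partitioning limit: a single A100 can be split into at most seven isolated instances, and the Station has four GPUs. A trivial check:

```python
# MIG partitioning arithmetic for the DGX A100 Station (four A100 GPUs).
# Each A100 supports at most 7 isolated MIG instances.
GPUS_IN_STATION = 4
MAX_MIG_INSTANCES_PER_GPU = 7

total_instances = GPUS_IN_STATION * MAX_MIG_INSTANCES_PER_GPU
print(total_instances)  # 28, matching the figure above
```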

As mentioned previously, the workstation form factor makes the A100 Station ideal for workgroups, and the system can be procured directly by lines of business. Juxtapose this with the A100 server, which is deployed into a data center and typically purchased and managed by the IT organization. Most line-of-business individuals, such as data scientists, don’t have the technical acumen or even the data center access to purchase a server, rack and stack it, connect it to the network and do the IT work that needs to be done to keep it running.

A100 Station is designed for simplicity

The A100 Station looks like a big computer. It sits upright on or under a desk and simply requires the user to plug the power cord and network in. The simple design makes it perfect for agile data science teams who work in a lab, a traditional office or even at home. DGX Station was designed for simplicity and does not require any IT support or advanced technical skills. My first job out of college was working with a group of data scientists as an IT person, and I can attest to the importance of simplicity with that audience.

Without something like the A100 Station, purpose-built for accelerated computing, workgroups would be forced to purchase CPU-based desktop servers that are severely underpowered for this kind of use case. Sure, the average Intel-based workgroup server can run Word and Google Docs, but it can take months to run AI-based analytic models. With GPU-powered systems, what took months can typically be done in just a few hours or even minutes.

Although NVIDIA didn’t announce a price for the DGX A100 Station, I’m guessing it’s approaching six figures, and that might seem high for a workstation. But considering the compensation levels of data scientists, keeping them working rather than sitting around waiting for models to run on CPU systems makes that cost a bargain. If one factors in the lost opportunity costs of not having an AI/ML-optimized system, the Station is a no-brainer for workgroups that need this kind of compute power.

Some companies might turn all AI infrastructure over to the IT organization, and that’s a perfectly fine model. Those companies likely will leverage one of the server form factors.

For those who leave the infrastructure decisions and purchasing within the lines of business, the DGX A100 Station is ideally suited. GPU power at the desk might seem a bit sci-fi-ish, but NVIDIA announced it this week.

Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.

The post How NVIDIA A100 Station Brings Data Center Heft to Workgroups appeared first on eWEEK.

]]>
https://www.eweek.com/big-data-and-analytics/how-nvidia-a100-station-brings-data-center-heft-to-workgroups/feed/ 0
Perspective: Why NVIDIA+Arm Shakes Up Chip Industry https://www.eweek.com/pc-hardware/nvidia-acquires-arm-to-shake-up-chip-industry/ https://www.eweek.com/pc-hardware/nvidia-acquires-arm-to-shake-up-chip-industry/#respond Mon, 14 Sep 2020 10:01:00 +0000 https://www.eweek.com/uncategorized/perspective-why-nvidiaarm-shakes-up-chip-industry/ After months of speculation, GPU king NVIDIA announced Sept. 13 that it is acquiring chip developer/designer Arm from SoftBank for $42 billion, comprising $12 billion in cash, $21.5 billion in stock, a $2 billion payment at signing, $1.5 billion in NVIDIA stock for Arm employees and an addition $5 billion payment based in Arm’s performance. […]

After months of speculation, GPU king NVIDIA announced Sept. 13 that it is acquiring chip developer/designer Arm from SoftBank for $40 billion, comprising $21.5 billion in NVIDIA stock and $12 billion in cash (including $2 billion payable at signing), plus $1.5 billion in NVIDIA stock for Arm employees and up to an additional $5 billion payment based on Arm’s performance.

Arm will continue to operate from its Cambridge, UK, headquarters and will function as a division of NVIDIA.

The sheer size of this deal highlights just how massive Santa Clara, Calif.-based NVIDIA has become in a relatively short period of time. When SoftBank purchased Arm in 2016, it paid about $32 billion, and NVIDIA’s market cap was only about $30 billion. A mere four years later, NVIDIA is worth $300 billion, or 10X its valuation back then. The company’s growth has been fueled by demand for its graphics processing units, the main computing engine behind accelerated computing workloads such as artificial intelligence, ray tracing, self-driving cars, supercomputers and other leading-edge use cases.

NVIDIA-Arm will be industry changing 

This deal will have a profound impact on the broader computing industry because it helps pave the way to expose Arm’s massive customer base to the power of GPU computing and can help NVIDIA build better integrated “end-to-end” systems. This becomes increasingly important as the world relies on accelerated computing to solve some of the planet’s biggest problems, such as finding a cure for COVID-19, building autonomous vehicles and doing seismic exploration. 

Arm designs CPUs, as Intel and AMD do, but with one major difference: Intel and AMD design chips, manufacture them and ship them to systems companies that install them on a motherboard, while Arm designs the silicon and then turns the architecture over to other companies to build the chips themselves. This enables each manufacturer to optimize its system for that processor, making it more power- and space-efficient than the pluggable model of an Intel or AMD processor.

An easy way to think of the difference is that Intel- and AMD-based systems are optimized in software, while Arm-based systems can be optimized in both hardware and software, creating more efficient systems.

Arm has been used in mobile but is rapidly expanding 

This is why Arm has been the preferred CPU architecture for mobile devices for years and can be found in iPhones as well as Samsung and Qualcomm-powered devices. But the efficiency and improved performance of Arm are just catching on elsewhere. Microsoft now makes an Arm-based Surface laptop and has released Windows on Arm, and Apple recently announced plans to move its future Mac computers to Arm-based processors.

It’s important to understand that, as powerful as GPUs are, they aren’t great at everything. They handle high-performance computing tasks such as video analytics and AI well, but CPUs are still needed to boot systems and run a variety of other processes, so even the most advanced systems use a combination of CPUs and GPUs. Now NVIDIA has both, which lets the company create better end-to-end designs when Arm processors are used. This is similar to the approach it is taking with data-center networks through its acquisition of Mellanox.

NVIDIA commits to leaving Arm open 

One of the important aspects of this announcement is that, on the acquisition call, NVIDIA CEO Jensen Huang made it crystal clear that Arm will continue to operate under its current open licensing model while maintaining its customer-neutral go-to-market approach.

The open approach has made Arm the company it is today, and NVIDIA won’t disrupt that. However, that doesn’t mean it’s the only approach the company can take. This acquisition opens the door for NVIDIA to take Arm’s designs and build the CPUs itself, similar to the way it builds GPUs, or to license its GPU designs using Arm’s model. Is either better? Not really; it depends on what the customer wants. I believe there is a market for both, which opens more doors for both companies, particularly NVIDIA, given that we are just at the start of the GPU-in-everything cycle.

On the analyst and press call Sept. 13, Huang did a good job of outlining the size and scope of Arm compared to NVIDIA. He stated that last year alone, Arm shipped 22 billion chips; in the same time frame, NVIDIA shipped about 100 million. The latter is a nice number, but it’s orders of magnitude smaller than Arm’s. The reason is that NVIDIA has historically served very select markets, such as supercomputers, self-driving cars and gaming systems. Now it has exposure to the entire Arm pie.

Similarly, NVIDIA can look to apply Arm’s performance and power efficiency across many of its GPU systems. One low-hanging-fruit use case is edge AI, where space and power are limited because many systems operate on batteries, yet those systems still need to perform complex tasks.

Another blow to Intel 

This acquisition will also be another nail in the coffin for rival Intel. For years, NVIDIA was considered a niche gaming company (which it was), while Intel was the king of silicon (which was also true). But over time, through a number of good decisions by NVIDIA and Intel’s inability to build a competitive GPU, NVIDIA continued to grow while Intel flat-lined. In July 2020, NVIDIA caught up with Intel in market cap, with both worth about $250 billion. Today, Intel has slipped to $209 billion, while NVIDIA is at about $300 billion. The acquisition of Arm will help NVIDIA accelerate the replacement of Intel processors with Arm by building better-optimized systems in which both CPUs and GPUs are needed.

I believe AI is the single biggest disruptive technology to happen since the birth of computing. Speech analytics, video recognition, translation, automation and more will become standard across almost all devices, both consumer and business, and will increase the scope of where GPUs are needed.

With the acquisition of Arm, NVIDIA is now able to offer its customers greater flexibility in how systems are designed, along with improved performance. This is a well-timed acquisition by the company, because we are just hitting that inflection point.

Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.

The post Perspective: Why NVIDIA+Arm Shakes Up Chip Industry appeared first on eWEEK.
