Virtualization Archives | eWEEK

ManageXR CEO Luke Wilson on Enterprise VR Management

I spoke with Luke Wilson, Founder and CEO of ManageXR, about the state of the virtual reality market, and how VR can support training and collaboration in enterprise settings.

Among the topics we discussed:

  • What’s the state of the enterprise VR market? What are a couple of key trends you see in 2022?
  • What’s a common problem that companies face with deploying enterprise VR? Any advice you would give?
  • How is ManageXR addressing the needs of the market? What’s your advantage?
  • The future of enterprise VR? When do you expect mainstream adoption?

Listen to the podcast:

Also available on Apple Podcasts

Watch the video:

How CrowdStorage Built an Affordable Alternative to Amazon S3

On-premises data storage is a lot like closet space: one never seems to have enough. However, the arrival of cloud-based storage solutions has changed the dynamic. More storage is just a few clicks away, making data storage today more like a long-term storage facility, where you pay for the space you need and the amount of time you need it.

However, there are a few caveats with that analogy, especially when it comes to calculating costs. Most storage-as-a-service solutions carry hidden costs: users pay not only for a given amount of storage space, but also to access that data. Imagine that whenever you wanted to retrieve something from a storage facility, you had to pay a fee on top of the agreed-upon rent just to take it out of storage.

Cloud storage vendors typically charge for storage space as well as for access to the data, through egress fees or per-API-request charges. Case in point is Amazon S3, where users are charged per GB and then charged again for data retrieval requests or other types of data access. Adding insult to injury, Amazon S3 uses a somewhat complex formula to calculate those fees, making it difficult to budget storage and access costs.

To its credit, Amazon S3 offers compatibility with numerous applications and services, making it quite simple for those applications and services to use Amazon S3 as a primary method for storing and accessing data. It is that broad support and compatibility that drives many organizations to default to S3, despite concerns about costs.

Cloud storage vendor CrowdStorage offers a different take on the cloud storage cost conundrum with its Polycloud object storage service, which pairs an S3-compatible API with set pricing and no hidden fees.

A Closer Look at Polycloud

CrowdStorage built Polycloud with several objectives in mind. The first was to build an alternative to existing cloud storage offerings, such as Amazon S3, Microsoft Azure Cloud Storage Services and Google Cloud Storage. Other objectives focused on affordability, compatibility and ease of use.

However, one primary goal was to establish a platform that could meet future needs as well as bring additional innovation into the cloud storage picture.

For example, the company has designed a method to store small chunks of data across multiple cloud-connected storage devices, in essence creating cloud object storage that is distributed across hundreds, if not thousands, of cloud-connected storage devices, with data replicated across those devices.

The company has already proven that approach with archival video files in a proprietary use case for a Fortune 5000 company. That use case leverages some 250,000 storage nodes, where 60MB objects are stored as 40 shards on target devices, creating a highly resilient and secure distributed object storage network.
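
The article does not detail the sharding scheme, so the following is a purely illustrative Python sketch of splitting a 60MB object into 40 fixed-count shards and reassembling it; the real system presumably uses erasure coding plus cross-node replication rather than a naive byte split.

    # Illustrative only: a naive byte-split into 40 shards. CrowdStorage's actual
    # scheme (likely erasure coding plus replication) is not described in the article.
    def shard(data, n_shards=40):
        size = -(-len(data) // n_shards)  # ceiling division
        return [data[i * size:(i + 1) * size] for i in range(n_shards)]

    def reassemble(shards):
        return b"".join(shards)

    obj = b"\x00" * (60 * 1024 * 1024)   # a 60MB object, matching the use case above
    pieces = shard(obj)
    assert len(pieces) == 40 and reassemble(pieces) == obj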

Hands On with Polycloud

Polycloud follows a “storage as a service” paradigm: users sign up for the service using a browser-based form. The service is priced using a pay-as-you-go model, in which users pay only for what they use.

There are no egress or ingress fees, long-term contracts or licensing charges. Current costs are roughly $4 per TB per month. CrowdStorage offers a cloud pricing calculator that compares the cost of Polycloud to other storage providers. The company also offers a “try before you buy” free membership, which includes 10GB of storage.
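
To make the pricing difference concrete, here is a rough sketch. Only the roughly $4/TB Polycloud figure comes from the article; the S3 rates below are assumed ballpark list prices used purely for illustration.

    # Rough monthly-cost sketch. Only the ~$4/TB Polycloud figure comes from the
    # article; the S3 rates below are assumed ballpark list prices, for illustration.
    def monthly_cost(tb_stored, tb_downloaded, per_tb_storage, per_gb_egress=0.0):
        return tb_stored * per_tb_storage + tb_downloaded * 1024 * per_gb_egress

    polycloud = monthly_cost(50, 10, per_tb_storage=4.0)                       # flat rate, no egress fees
    s3_guess = monthly_cost(50, 10, per_tb_storage=23.0, per_gb_egress=0.09)   # assumed ~$0.023/GB storage + ~$0.09/GB egress
    print(f"Polycloud: ${polycloud:,.0f}/month, S3 (assumed rates): ${s3_guess:,.0f}/month")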

Once an account is established, users can access storage through a browser-based interface. The browser-based console is rudimentary, and most users will probably only use it to set up storage buckets and upload or download files. That said, it proves useful enough for storing archival data or other data not directly associated with an application, such as backup files and logs.

Once storage buckets are established, users can leverage CrowdStorage’s S3 compatibility. The company offers integration with numerous applications and makes it quite easy to create access keys to protect data. Integrations (via S3) are offered for numerous applications, including most of the AWS SDKs, meaning that custom software developed using those SDKs can also access storage buckets, as the sketch below illustrates.
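
A minimal sketch of reaching an S3-compatible service through the standard AWS SDK for Python (boto3). The endpoint URL, bucket name and credentials below are placeholders, not documented Polycloud values; the point is simply that an S3-compatible store can be addressed by pointing the SDK at a custom endpoint with access keys generated in the provider’s console.

    # Placeholder endpoint, bucket and credentials; not documented Polycloud values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://gateway.polycloud.example",   # hypothetical S3-compatible endpoint
        aws_access_key_id="ACCESS_KEY_FROM_CONSOLE",
        aws_secret_access_key="SECRET_KEY_FROM_CONSOLE",
    )

    s3.upload_file("backup-logs.tar.gz", "archive-bucket", "logs/backup-logs.tar.gz")
    for obj in s3.list_objects_v2(Bucket="archive-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])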

Native S3 integrations are offered for ARQ 7 Backup, CloudBerry Explorer, Commvault, QNAP and many other third-party applications. Integrating applications is straightforward: users simply define a storage location and provide the necessary credentials. Some applications, such as ARQ 7 Backup, provide wizard-like configuration, further easing setup.

Conclusions

Currently, Polycloud’s claim to fame is economy. In other words, CrowdStorage is offering Polycloud as a low-cost, S3-compatible option for cloud data storage. Those looking to significantly reduce cloud storage costs will be well served by Polycloud.

However, CrowdStorage is also evolving the Polycloud offering and will expand its options to include a distributed storage service, which promises additional security and even lower costs. The distributed storage model will offer increased resiliency as well as increased uptime.

Polycloud’s distributed network combines unused storage and bandwidth resources that are already deployed and connected to the internet. Each storage device on the distributed network becomes a distinct node; combined, the nodes provide more than 400 petabytes of capacity. The nodes are geographically dispersed, and data shards are replicated across multiple nodes, increasing resiliency while also making the data more secure, since no single file resides on a single device.

Google Cloud vs AWS 2022: Compare Features, Pricing, Pros & Cons

Selecting both primary and secondary cloud services is now a common IT strategy for most enterprises. Recent research shows that about 90 percent of enterprises and nonprofit organizations use multiple cloud services accounts. This is a big change from just a few years ago, when some companies were still reluctant to trust their business data to any cloud application. Today we offer a Google Cloud Platform vs. Amazon Web Services comparison.

Public cloud service providers such as Amazon Web Services, Microsoft Azure, Google, IBM, Dell EMC, Salesforce, Oracle and others keep making it easier for customers to come and go, and to add or subtract computing capacity or applications as needed. These and other providers also keep introducing new and more efficient services, many of which now feature artificial intelligence options that make them more usable for technical and non-technical employees alike.

In this article, we take a close look at two of the three largest cloud services providers in the world: Amazon Web Services and Google Cloud Platform. eWEEK uses research from several different sources, including individual analysts, TechnologyAdvice, Gartner, ITC, Capterra, IT Central Station, G2 and others.

Here we compare these two global cloud storage and computing services at a high level and in a few different ways, to help you decide which one is the most cost- and feature-efficient fit for your company.

Similarities and Differences of AWS vs. Google Cloud

To use an AWS service, users must sign up for an AWS account. After they have completed this process, they can launch any service under their account within Amazon’s stated limits, and these services are billed to their specific account. If needed, users can create billing accounts and then create sub-accounts that roll up to them. In this way, organizations can emulate a standard organizational billing structure.

Similarly, GCP requires users to set up a Google account to use its services. However, GCP organizes service usage by project rather than by account. In this model, users can create multiple, wholly separate projects under the same account. In an organizational setting, this model can be advantageous, allowing users to create project spaces for separate divisions or groups within a company. This model can also be useful for testing purposes: once a user is done with a project, he or she can delete the project, and all of the resources created by that project also will be deleted.

AWS and GCP both have default soft limits on their services for new accounts. These soft limits are not tied to technical limitations for a given service; instead, they are in place to help prevent fraudulent accounts from using excessive resources, and to limit risk for new users, keeping them from spending more than intended as they explore the platform. If you find that your application has outgrown these limits, AWS and GCP provide straightforward ways to get in touch with the appropriate internal teams to raise the limits on their services.

Resource management interfaces

AWS and GCP each provide a command-line interface (CLI) for interacting with their services and resources. AWS provides the AWS CLI, and GCP provides the Cloud SDK. Each is a unified CLI for all services, and each is cross-platform, with binaries available for Windows, Linux and macOS. In addition, in GCP you can use the Cloud SDK in your web browser via Google Cloud Shell.

AWS and GCP also provide web-based consoles. Each console allows users to create, manage, and monitor their resources. The console for GCP is located at https://console.cloud.google.com/.
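
The CLIs and consoles ultimately expose the same account-versus-project split described above. A minimal SDK-level sketch of that difference follows; it assumes credentials are already configured locally, and the profile and project names are placeholders.

    # Assumes credentials already exist locally (e.g., `aws configure` and
    # `gcloud auth application-default login`); names below are placeholders.
    import boto3
    from google.cloud import storage

    # AWS: resources hang off the account behind the active credentials/profile.
    s3 = boto3.Session(profile_name="default").client("s3")
    print("AWS buckets:", [b["Name"] for b in s3.list_buckets()["Buckets"]])

    # GCP: resources are scoped to a project, named explicitly on the client.
    gcs = storage.Client(project="my-example-project")
    print("GCP buckets:", [b.name for b in gcs.list_buckets()])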

Pricing processes are different

One area where there is a notable difference between these two market leaders is pricing. AWS uses a pay-as-you-go model and charges customers per hour—and they pay for a full hour, even if they use only one minute of it. Google Cloud follows a to-the-minute pricing model.
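
A small arithmetic sketch makes the granularity difference concrete under the model described here (AWS billed per full hour, Google Cloud per minute); note that actual billing increments vary by service and have changed over time.

    import math

    def billed_hours(runtime_minutes, increment_minutes):
        """Round a runtime up to the provider's billing increment, expressed in hours."""
        return math.ceil(runtime_minutes / increment_minutes) * increment_minutes / 60

    runtime = 61  # minutes of actual use
    print(billed_hours(runtime, 60))  # per-hour billing (as described for AWS): 2.0 hours billed
    print(billed_hours(runtime, 1))   # per-minute billing (as described for Google Cloud): ~1.02 hours billed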

Many experts recommend that enterprises evaluate their public cloud needs on a case-by-case basis and match specific applications and workloads with the vendor that offers the best fit for their needs. Each of the leading vendors has particular strengths and weaknesses that make them a good choice for specific projects.

So, let’s get more specific.

What is Google Cloud Platform?

For the past 15 years, Google has been building one of the fastest, most powerful, and highest-quality cloud infrastructures on the planet. Internally, Google itself uses this infrastructure for several high-traffic and global-scale services, including Gmail, Maps, YouTube and Search. Because of the size and scale of these services, Google has put a lot of work into optimizing its infrastructure and creating a suite of tools and services to manage it effectively. GCP puts this infrastructure and these management resources at users’ fingertips.

Google Cloud new features for 2021

  • In July 2020, Google Cloud introduced BigQuery Omni, a new multi-cloud analytics solution, powered by its hybrid and multi-cloud Anthos platform, that allows users to run the same analytics across multiple cloud and data center environments. The new package extends Google Cloud’s analytics platform to other public clouds without leaving the BigQuery user interface and without having to move or copy datasets. It’s available in private alpha for Amazon Web Services’ Amazon Simple Storage Service (S3), and support for Microsoft Azure is coming soon.
  • Also in July 2020, Google Cloud unveiled the first product of its confidential computing portfolio: new Confidential VMs that allow users to run workloads in Google Cloud while ensuring their data is encrypted while it’s in use and being processed, not just at rest and in transit. The solution, available in beta for Google Compute Engine, helps remove cloud adoption barriers for customers in highly regulated industries. The Confidential VMs are based on Google Cloud’s N2D series instances and leverage AMD’s Secure Encrypted Virtualization feature supported by its 2nd Gen AMD EPYC CPUs. Dedicated per-VM encryption keys are generated in hardware and are not exportable.
  • New Assured Workloads for Government, now in beta in Google’s U.S. regions, enable customers to automatically apply controls to their workloads, making it easier to meet security and compliance requirements for processing government data, including those concerning U.S. data locations and personnel access.
  • Customer to Community (C2C) is billed as an independent community where Google Cloud customers, including IT executives, developers and other cloud professionals, can connect, share and learn. Customers joining C2C, which is currently open to those in North America, Europe, the Middle East and Africa, will get access to exclusive networking opportunities, the ability to connect with other customers through virtual and in-person events, and expanded access to Google Cloud experts and content such as knowledge forums, white papers and methodologies. They’ll also receive early and exclusive access to Google Cloud product roadmaps and will be able to provide feedback and serve as customer-advisors.

Defining GCP

Google Cloud was developed by Google and launched in 2008. It was written in Java, C++, Python and Ruby, and it provides services spanning IaaS, PaaS and serverless platforms. Google Cloud is organized into different services, such as Google App Engine, Google Compute Engine, Google Cloud Datastore, Google Cloud Storage, BigQuery (for analytics) and Google Cloud SQL. The Google Cloud platform offers high-level computing, storage, networking and databases.

It also offers different options for networking, such as Virtual Private Cloud, Cloud CDN, Cloud DNS, load balancing and other optional features, as well as management of big data and Internet of things (IoT) workloads. Machine learning surfaces across Google Cloud in services such as Cloud Machine Learning Engine, Cloud Video Intelligence, the Cloud Speech API and the Cloud Vision API. Suffice to say there are numerous options inside Google Cloud, which is most often used by developers, as opposed to line-of-business company employees.

Google Regions and Zones

Nearly all AWS products are deployed within regions located around the world. Each region comprises a group of data centers that are in relatively close proximity to each other. Amazon divides each region into two or more availability zones. Similarly, GCP divides its service availability into regions and zones that are located around the world. For a full mapping of GCP’s global regions and zones, see Cloud Locations.

In addition, some GCP services are located at a multi-regional level rather than the more granular regional or zonal levels. These services include Google App Engine and Google Cloud Storage. Currently, the available multi-regional locations are United States, Europe and Asia.

By design, each AWS region is isolated and independent from other AWS regions. This design helps ensure that the availability of one region doesn’t affect the availability of other regions, and that services within regions remain independent of each other. Similarly, GCP’s regions are isolated from each other for availability reasons. However, GCP has built-in functionality that enables regions to synchronize data across regions according to the needs of a given GCP service.

AWS and GCP both have points of presence (POPs) located in many more locations around the world. These POP locations help cache content closer to end users. However, each platform uses its POP locations in different ways:

  • AWS uses POPs to provide a content delivery network (CDN) service, Amazon CloudFront.
  • GCP uses POPs to provide Google Cloud CDN (Cloud CDN) and to deliver built-in edge caching for services such as Google App Engine and Google Cloud Storage.

GCP’s points of presence connect to data centers through Google-owned fiber. This unimpeded connection means that GCP-based applications have fast, reliable access to all of the services on GCP, Google said.

Google Cloud Platform: Pros, cons based on user feedback

PROS: Users count heavily on Google’s engineering expertise. Google has an exemplary offering in application container deployments, since Google itself developed the Kubernetes app management standard that both AWS and Azure now offer. GCP specializes in high-end computing offerings such as big data, analytics and machine learning. It also provides considerable scale-out options and data load balancing; Google knows what fast data centers require and offers fast response times in all of its solutions.

CONS: Google is a distant third place in market share (8 percent; AWS is at 33 percent, Azure at 16 percent), most likely because it doesn’t offer as many different services and features as AWS and Azure. It also doesn’t have as many global data centers as AWS or Azure, although it is quickly expanding. Gartner said that its “clients typically choose GCP as a secondary provider rather than a strategic provider, though GCP is increasingly chosen as a strategic alternative to AWS by customers whose businesses compete with Amazon, and that are more open-source-centric or DevOps-centric, and thus are less well-aligned to Microsoft Azure.”

This is a high-level comparison of two of the top three major cloud service leaders here in 2021. We will be updating this article with new information as it becomes available, and eWEEK will also be examining in closer detail the various services—computing, storage, networking and tools—that each vendor offers.

What is AWS?

Amazon Web Services (AWS) is a cloud service platform from Amazon that provides services in domains such as compute, storage and content delivery, along with other functionality that helps businesses scale and grow. These services can be used to create and deploy different types of applications in the cloud or to migrate apps to the AWS cloud, and they are designed to work with one another to produce scalable, efficient outcomes. AWS services fall into three categories: infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS). AWS launched in 2006 and became the most widely adopted of the currently available cloud platforms. Cloud platforms offer various advantages, such as reduced management overhead and lower costs.

Important new AWS features for 2021

  • AWS Control Tower now includes an organization-level aggregator, which assists in detecting external AWS Config rules. This will provide you with visibility in the AWS Control Tower console to see externally created AWS Config rules in addition to those AWS Config rules created by AWS Control Tower. The use of the aggregator enables AWS Control Tower to detect this information and provide a link to the AWS Config console without the need for AWS Control Tower to gain access to unmanaged accounts.
  • Amazon Elastic Container Service (Amazon ECS) has launched a new management console. You can now create, edit, view and delete Amazon ECS services and tasks, and view ECS clusters, in fewer, simpler steps. You can also learn about ECS capabilities and discover your ECS resources quickly and easily in the new console, as well as switch back to the existing console if needed. The new console will be continuously updated until all functionality from the existing console is available, and both consoles will remain available until then.
  • AWS IoT SiteWise Monitor now supports AWS CloudFormation, enabling customers to create and manage AWS IoT SiteWise Monitor resources such as portals, projects, dashboards, widgets, and properties using CloudFormation.
  • AWS Data Exchange Publisher Coordinator and AWS Data Exchange Subscriber Coordinator are new AWS Solutions Implementations that automate the publishing and consumption of data via AWS Data Exchange.
  • As of Jan. 1, 2021, users now can use additional controls on their Amazon WorkDocs Android application that enable them to execute workflows such as deleting, renaming, and adding files and folders to their Favorite list directly from the Folder List view. They can also rename as well as add a file or folder to a Favorite list for quick access and offline use from the Document Preview view. These additional controls now surfaced from the Folder List and Document Preview view further facilitate content collaboration for teams.

AWS Pros and Cons, Based on User Feedback

PROS: Amazon’s single biggest strength really turned out to be the fact that it was first to market in 2006 and didn’t have any serious competition for more than two years. It sustains this leadership by continuing to invest heavily in its data centers and solutions. This is why it dominates the public cloud market. Gartner Research reported in its Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, that “AWS has been the market share leader in cloud IaaS for over 10 years.” Specifically, AWS has been the world leader for closer to 15 years, or ever since it first launched its S3 (Simple Storage Service) in fall 2006.

Part of the reason for its popularity is certainly the massive scope of its global operations. AWS has a huge and growing array of available services, as well as the most comprehensive network of worldwide data centers. Gartner has described AWS as “the most mature, enterprise-ready (cloud services) provider, with the deepest capabilities for governing a large number of users and resources.”

CONS: Cost and data access are Amazon’s Achilles heels. While AWS regularly lowers its prices—in fact, it has lowered them more than 80 times in the last several years, which probably means they were too high for starters—many enterprises find it difficult to understand the company’s cost structure. They also have a hard time managing these costs effectively when running a high volume of workloads on the service. And customers, beware: Be sure you understand the costs of extracting data and files once they are in AWS’s storage control. AWS will explain it all upfront for you, but know that it’s a lot easier to start a process, upload files into the AWS cloud and access apps and services than it is to find the data and files you need and move them to another server or storage array.

In general, however, these cons are outweighed by Amazon’s strengths, because organizations of all sizes continue to use AWS for a wide variety of workloads.

Go here to see eWEEK’s listing of the Top Cloud Computing Companies.

Go here to read eWEEK’s Top Cloud Storage Companies list.

This article is an update of a previous eWEEK study by Chris Preimesberger from 2019.

How Williams Racing Moves its Data Center Nearly as Fast as its Cars

The last time eWEEK looked at the role of computing in Formula 1 racing, a very lucky Chris Preimesberger visited the headquarters of Ferrari in Maranello, Italy. What he found there, in addition to some drool-worthy race cars, was a data center the size of a house with ten racks of equipment, power conditioning and liquid-cooled rack servers. That was 11 years ago, and things have changed.

What hasn’t changed is the need for ever more computing power in Formula 1, and a growing need to have the computers as close as possible to the race itself. The extent to which these teams will go to ensure that they have the best, and fastest, computing support possible is demonstrated by Williams F1 Racing and its mobile data center. While Williams has a data center at the company headquarters in the UK, it also has a mobile data center that travels with the racing team.

“We have two racks of equipment that we carry around the world,” said Graeme Hackland, group CIO for Williams Racing. The Williams data center also includes workstations and power conditioning, including uninterruptible power supplies and generators, both of which serve to back up the local power wherever the race is being held. 

Hackland said that the data center used to be larger, but two changes reduced the number of racks: virtualization and the ability to offload part of the computing tasks to the cloud. “We once had four racks of equipment before virtualization,” he said. “We funded our virtualization project in nine months in one season just from freight savings.” Hackland said that it costs about $300 per kilogram per race to ship the data center to the location for each race.

The new, pop-up mobile data center

But there’s a lot more to operating a mobile data center than just shipping it. “We arrive in an empty garage,” Hackland explained. “A third party runs an internet cable to our garage. We run cables around our garage.” Hackland said they use a physical, cabled network at the garage because “WiFi becomes unreliable during the weekend.”

Hackland said that the Ethernet cabling is run to the pit area across the pit wall, and to what the team calls their motorhome–but which is really a hospitality area with offices. He said that the pit crew needs access to the network so they can make changes during the race. “We make sure they have 100 percent real time access to the pit wall,” Hackland said.

The popup data center receives a continuous stream of encrypted telemetry from the car during the race, which allows the team to monitor it for potential adjustments and for problems that may be starting to appear. The resulting network traffic is handled by Cisco Systems networking gear and firewalls and is stored trackside on Nutanix systems. Hackland said that the data center maintains a connection to the Federation Internationale de l’Automobile (FIA), which is the sanctioning body for Formula 1.
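
The article discloses nothing about Williams’ actual protocols, key handling or frame formats, so the following Python sketch is entirely hypothetical; it only illustrates the general shape of a trackside consumer that decrypts one telemetry frame and reads its sensor values.

    # Hypothetical sketch of a trackside telemetry consumer; the cipher, key handling
    # and frame format are all assumptions, not details disclosed by the team.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice a pre-provisioned shared key, never generated trackside
    cipher = Fernet(key)

    def handle_frame(payload):
        """Decrypt one encrypted telemetry frame and return its readings."""
        return json.loads(cipher.decrypt(payload))

    # One of the ~300 sensors mentioned below might emit a frame like this:
    frame = cipher.encrypt(json.dumps({"sensor": "tyre_temp_fl", "value": 92.4, "lap": 17}).encode())
    print(handle_frame(frame))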

“We have 40 engineers who need it,” Hackland explained. He said that the data then flows back to the UK over a 100Mbps link for 40 more engineers. “We burst data up to the cloud for additional computation capacity,” he said. A lot of the data is video of the Williams car and competitor analysis.

“We’re looking at things like cornering,” Hackland said. He noted that video of competing cars is used to see what they’re doing, and if the analysis shows something that’s suspect, then it can be used for a protest of the race results. Much of the analysis takes place after the cars first get on the track, so that it can be used for the race.

The team does a lot of analysis on Fridays to get the optimum setup on Saturday. In Formula 1, practice and qualifying take place on the Saturday before the race on Sunday. They do tire analysis on each driver and on other teams. Hackland said they need to know how many laps the car can go on which tire compound for each driver.

“We have a number of engineers and someone on the pit wall who talks to the drivers. The race engineer is talking to their driver,” Hackland said. He said that the team has about 300 sensors on the car that feed data back to the team. With that data, they get the car ready for qualifying.

Mobile but still very secure

“We’re taking this mobile data center all around the world,” Hackland said. “We need to protect the data. We’re in the garage with other teams.” He said that while a team can be thrown out of the championship for stealing data from competing teams, that doesn’t eliminate the need to keep it secure.

“What we do look at is protecting the cars,” Hackland said. “The car is a connected device, and you have to connect a cable to it to change the configuration. FIA has banned two-way telemetry for safety reasons.” He said that the data from the car is encrypted, but it needs to be available to the engineers so they can work on configuration settings on their laptops. He said that Williams uses Dtex Intercept and software from Symantec to keep track of data and to protect it against compromise.

What the mobile data center, and all of the extra effort that goes with it, provides is speed. Formula 1 teams need to get their data within microseconds to keep their car’s performance at the highest level constantly. Even delays incurred by sending data to the cloud or headquarters can be too much, and that means the data center has to be at the track. The trackside data center means the engineers can have access to the data instantly, and do it in a secure manner. 

Like the race itself, the differences are measured in microseconds.

Wayne Rash, a former executive editor of eWEEK, is a longtime contributor to our publication and a frequent speaker on business, technology issues and enterprise computing.

Ivanti, Intel Partner on Self-Healing Endpoints for Remote Workers

Salt Lake City-based Ivanti, which automates IT and security operations to manage and secure data from cloud to edge, and processor-making market leader Intel have announced a new strategic partnership to offer Device-as-a-Service (DaaS) with self-healing capabilities for the next-generation workforce.

DaaS helps enterprises save capital costs by taking a typical hardware device, such as a laptop, desktop, tablet, or mobile phone, bundling it with a variety of services and software, and offering it to a customer for a monthly subscription fee. 

As a result of the new alliance announced last week, Intel Endpoint Management Assistant (Intel EMA) now integrates with the Ivanti Neurons hyper-automation platform, which enables IT organizations to self-heal and self-secure Intel vPro platform-based devices–both inside and outside a corporate firewall.

“As remote becomes the next normal, auto-healing, securing and servicing endpoints and edge devices becomes a key priority for organizations,” Nayaki Nayyar, Ivanti executive vice president and chief product officer, said in a media advisory. “With Ivanti Neurons, organizations supporting Intel vPro platform-powered devices can gain a 360-degree view of users, devices, and applications and auto-remediate performance, security, configuration issues.” 

With the integration of Intel Endpoint Management Assistant, Neurons provides next-generation remote management for on-premises and cloud-based endpoints. Ivanti Neurons can handle remote actions on Intel vPro platform-based devices, such as powering on a device, restarting a device, setting wake-up times and controlling a system–even during OS failure–and repairing devices at scale. 

This also takes into account security patching and other actions.

For more information, go here.

#eWEEKchat Tuesday, July 14: ‘Next-Gen Networking Trends’

On Tuesday, July 14, at 11 a.m. PDT/2 p.m. EDT/7 p.m. GMT, @eWEEKNews will host its 88th monthly #eWEEKChat. The topic will be “Next-Gen Networking Trends,” and it will be moderated by eWEEK Editor Chris Preimesberger.

Some quick facts:

Topic: #eWEEKchat, July 14: “Next-Gen Networking Trends”  

Date/time: Tuesday, July 14, 11 a.m. PDT / 2 p.m. EDT / 7 p.m. GMT

Participation: You can use #eWEEKchat to follow/participate via Twitter itself, but it’s easier and more efficient to use the real-time chat room link at CrowdChat. Instructions are on that page; log in at the top right, use your Twitter handle to register, and the chat begins promptly at 11 a.m. PT. The page will come alive at that time with the real-time discussion. You can join in or simply watch the discussion as it is created. Special thanks to John Furrier of SiliconAngle.com for developing the CrowdChat app.

Our in-chat experts this month are: Mike Anderson, VP of Marketing, Stateless; Larry Lunetta, VP of WLAN and Security Solutions Marketing, Aruba; Charles Cheevers, CTO of Home Networks, CommScope; Sivan Tehila, Director of Solution Architecture, Perimeter 81; Tony Cai, Partner Sales Executive, Nerdio; Amy Abatangle, CMO of Netdata; and Matt Mangriotis, Cambium’s director of product management. Check back for late additions.

Chat room real-time link: Use https://www.crowdchat.net/eweekchat. Sign in with your Twitter handle and use #eWEEKchat for the identifier.   

Next-Gen Networking: What exactly are the trends?

For decades, IT networking was all about wiring, plugging cables into the right ports, and using firewalls for security. While those conventions are still a mainstay of the connected world, innovation has changed the industry enormously in the last eight to 10 years. The data center industry has long since moved to larger-pipe connectivity (Ethernet, Infiniband), central network controls, automated storage and security, built-in intelligence—to mention only a few upgrades.

Software-defined networking (SDN), SD-WAN (wide-area networks) and other network virtualization technologies have driven the conversation in the industry for the past few years. However, for all the talk about SDN and SD-WAN (there have been hundreds of analyst reports and thousands of news stories written about it), the tech world is still in the relatively early stages of these and other innovations, such as network-functions virtualization (NFV). Still, there are more than a few network administrators who continue to think it might be overkill to overhaul a system for a small or medium-size business when the current one works just fine.

The problem is that the amount of data being generated is not about to slow down or be reduced any time soon. Industry analysts have calculated that all the data racked up in the world in one month in 2020 probably totals more than the data stored in the entire year of 2019. Businesses need to keep up with this data growth in order to stay competitive in their markets; should someone slip, others gain advantage that they might not relinquish for a while.

Lots of upgrading now in progress

With the advent of widespread SD-WAN, WiFi 6 and 5G on the horizon, plenty of key decision-makers are, in fact, currently upgrading their networks. Speed in moving data streams to where the computing is taking place, and vice-versa, has never been more important to businesses–local or global.

WiFi 6, for one example, enables speeds to improve, latency to recede and familiar limitations of WiFi to vanish. The relatively fallow ground of 6GHz means that compromises due to legacy devices would be gone, making WiFi something that you could use anywhere in the office or on the production floor.

Imagine WiFi 6 at 60GHz. With all of that extra bandwidth, wireless capacity would move far beyond the current limitations of fiber networks in the office. While there will still be a role for fiber outside of the office, inside the office, 60GHz WiFi 6 will simplify enterprise networking by providing a multi-gigabit infrastructure without the disruption of cabling or the expense of wired infrastructure.

Cambium Networks is one of the forward-looking networking companies that is already providing hardware and software infrastructure around WiFi 6. “From New York to Naples to Nigeria, everyone wants super-fast wireless connections,” Cambium CEO Atul Bhatnagar said. “By bringing together Wi-Fi 6 and 60GHz solutions with cloud software, we’re changing the game with unified wireless that can serve any city, any enterprise, any school, any business or any industry at a fraction of the cost of wired networks. With this new wave of technology, wireless is the new fiber, and it simply just works.”

Seed questions

Certainly 5G connectivity will be a huge improvement over 4G LTE in the wireless world. That will be a major talking point in our #eWEEKchat. Here are examples of seed questions we’ll pose to our audience on July 14:

  • Will WiFi 6 become the next backbone-type network? Why or why not?
  • What is intent-based networking and what are its business advantages?
  • How is SD-WAN able to converge network, security and AI all in one?
  • How and why is NVIDIA, one of the world’s top graphics processor makers, becoming a leader in next-gen networking?
  • Who are some of the young networking startup “stars” of the business and what new functionality do they bring to the table?

Join us July 14 at 11 a.m. Pacific/2 p.m. Eastern/7 p.m. GMT for an hour. Chances are good that you’ll learn something valuable.

#eWEEKchat Tentative Schedule for 2020*

xJan. 8: Trends in New-Gen Data Security
xFeb. 12: Batch Goes Out the Window: The Dawn of Data Orchestration
xMarch 11: New Trends and Products in New-Gen Health-Care IT
xApril 8: Trends in Collaboration Tools
xMay 12: Trends in New-Gen Mobile Apps, Devices
xJune 9: Data Storage, Protection in a Hypersensitive Era
July 14: Next-Gen Networking
Aug. 11: Next-Gen Cloud Services and Delivery
Sept. 8: tentative: DevSecOps: Open Source Security and Risk Assessment
Oct. 13: DataOps: The Data Management Platform of the Future?
Nov. 10: Hot New Tech for 2021
Dec. 8: Predictions and Wild Guesses for IT in 2021

*all topics subject to change
x=completed

Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud

Did someone tell your data to shelter in place? That wouldn’t make any sense, would it? Ironically, for vast troves of valuable enterprise data, that might as well be the case, because massive, compute-bound data silos are practically everywhere in the corporate world.

Hadoop played a role in creating this scenario, because many large organizations sought to leverage the twin promises of low-cost storage and massively parallel computing for analytics. But a funny thing happened to the yellow elephant: It was largely obviated by cheap cloud storage.

Seemingly overnight, the price of cloud storage dropped so precipitously that the cost-benefit analysis of using the Hadoop Distributed File System (HDFS) on-premises for new projects turned upside down. Even the term “Hadoop” disappeared from the names of major conferences.

That’s not to say there isn’t valuable data in all those HDFS repositories, however. Many important initiatives used this technology in hopes of generating useful insights. But with budgets moving away from Hadoop, another strategy is required to be successful. 

What about computing? Suffice to say, cloud providers now offer a robust service in this territory. And relatively recent innovations such as separating computing from storage have also played a part in paving the way for cloud-based computing to take on all manner of workloads.

So the cloud now easily eclipses most on-premises environments in all the major categories: speed, cost, ease of use, maintenance, scalability. But there are barriers to entry; or at least pathways that should be navigated carefully while making the move. Mistakes can be very costly!

But how do you get the data there? Amazon actually offers a “snow truck” that will come to your data center, load it up one forklift at a time, and haul it, old-school, to its facility. That approach can certainly work for a quick-and-relatively-dirty solution, but it ignores the magic of cloud.

Seeing Triple

As the concept of “cloud-native” gets hashed out on whiteboards in boardrooms around the business world, the reality taking shape is that a whole new generation of solutions is being born. These systems are augmented with high-powered analytics and artificial intelligence. 

This new class of application is almost exclusively built on a microservices architecture with Kubernetes as the foundation. There is tremendous value to this approach, because scalability is built into its DNA. Taking advantage of this new approach requires a commitment to change.

Simply shipping your data and applications in toto to a cloud provider absolutely does not solve this challenge. In fact, it will likely result in a significant rise in total cost of ownership (TCO), thus undermining a major driver for moving to the cloud.

Another strategy involves porting specific data sets into the cloud to deploy the power of all that computation. This often involves making copies of the data. While change-data-capture can be used to keep these disparate environments in sync, there are downsides to this approach.

In the first place, CDC solutions always need to be meticulously managed. Small amounts of data drift can quickly become larger problems. This is especially problematic when the derived analytics are used for mission-critical business decisions, or customer-experience initiatives.

Secondly, by going down this road, organizations risk the proliferation of even more data silos–this time in the cloud. And while cloud storage is getting cheaper, the cost of egress can creep up and throw budgets sideways; this is not good in a post-COVID world.
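
The drift risk called out above can be made concrete with a toy check. The sketch below is illustrative only: a real CDC pipeline would compare checksums or row counts per partition on a schedule, using whatever tooling the pipeline provides.

    # Toy drift check between a source table and its cloud copy; in a real CDC
    # pipeline this would compare checksums/row counts per partition on a schedule.
    import hashlib

    def table_digest(rows):
        h = hashlib.sha256()
        for row in sorted(map(repr, rows)):
            h.update(row.encode())
        return h.hexdigest()

    source = [(1, "alice"), (2, "bob"), (3, "carol")]
    replica = [(1, "alice"), (2, "bob")]          # one change not yet applied
    print("in sync" if table_digest(source) == table_digest(replica) else "drift detected")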

Remember, the standard redundancy of Hadoop was to have three copies of every datum, which is good for disaster recovery but rather taxing overall, both in terms of throughput and complexity. While moving into the new world of cloud computing, we should avoid old errors.

Agile Defined

A different approach to bridging the worlds of on-prem data centers and the growing variety of cloud computing services is offered by a company called Alluxio. From its roots in UC Berkeley’s AMPLab, the company has been focused on solving this problem.

Alluxio decided to bring the data to computing in a different way. Essentially, the technology provides an in-memory cache that nestles between cloud and on-prem environments. Think of it like a new spin on data virtualization, one that leverages an array of cloud-era advances.

According to Alex Ma, director of solutions engineering at Alluxio: “We provide three key innovations around data: locality, accessibility and elasticity. This combination allows you to run hybrid cloud solutions where your data still lives in your data lake.”

The key, he said, is that “you can burst to the cloud for scalable analytics and machine-learning workloads where the applications have seamless access to the data and can use it as if it were local–all without having to manually orchestrate the movement or copying of that data.”

In this sense, Alluxio’s approach bridges the best of both worlds: You can preserve your investments in on-prem data lakes while opening a channel to high-powered analytics in the cloud, all without the encumbrance of moving massive amounts of data here or there. 

“Data locality means bringing the data to compute, whether via Spark, Presto or TensorFlow,” Ma said. In this scenario, Alluxio is installed alongside the compute framework and deploys unused resources on those servers to provide caching tiers for the data.
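
A minimal PySpark sketch of what bringing data to compute can look like in practice: the job addresses the dataset through an Alluxio-style URI so reads can be served from the colocated cache rather than the underlying store. The master hostname, port and paths are placeholders, and the sketch assumes the Alluxio client is already on the Spark classpath; it is not a documented deployment.

    # Placeholder hostname, port and paths; assumes the Alluxio client jar is on
    # the Spark classpath. Illustrates addressing data through a cache layer's
    # namespace rather than the underlying HDFS/object store directly.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cache-layer-demo").getOrCreate()

    # The same data that lives in the on-prem data lake, read via the cache layer's URI
    df = spark.read.parquet("alluxio://alluxio-master:19998/datasets/clickstream/")
    df.groupBy("country").count().show()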

Options, Options

There are various ways to get it done, depending upon the topology of the extant information architecture. In some environments, if Presto is using a lot of memory, Alluxio can allocate SSDs on the appropriate machines for optimized caching. 

If you’re tying into HDFS, Presto can make the request, and Alluxio’s intelligent multi-tiering then uses whatever the most efficient approach might be–spanning memory, SSD or spinning disk. It all can be optimized as Alluxio monitors data access patterns over time.

Regardless of which tools an organization uses–Tensorflow, Presto, Spark, Hive–there will be different usage patterns across CPU, GPU, TPU and RAM. In the case of RAM and available disk types, Alluxio can work with whatever resources are available. 

“Spark is less memory-intensive,” Ma said, “so we can allocate some memory. So you have choice to figure out what you want to allocate and where. Alluxio allows you to seamlessly access the data in the storage area, wherever it may be.” 

There’s also the concept of a Unified Name Space. “What it allows you to do is have a storage configuration that’s centrally managed,” Ma said. “You’re not going into Spark and Presto to set it all up; you’re able to configure Alluxio once, and then Spark or Presto communicate to Alluxio.”

The general idea is to create a high-speed repository of data that allows analysts to get the speed and accuracy they demand without giving in to the temptation of data silos. Think of it as a very large stepping stone to the new normal of multi-cloud enterprise computing.

“With Alluxio, we sit in the middle and offer interfaces on both sides; so we can talk to a variety of different storage layers,” Ma said. “We act as a bridging layer, so you can access any of these technologies.” In short, you can have your data cake and eat it too. 

Like any quality abstraction layer, solving the data challenge in this manner enables companies to leverage their existing investments. Data centers will have a very long tail, and cloud services will continue to evolve and improve over time. Why not get the best of both worlds?

Eric Kavanagh is CEO of The Bloor Group, a new-media analyst firm focused on enterprise technology. A career journalist with more than two decades of experience in print, broadcast and Internet media, he also hosts DM Radio, and the ongoing Webcast series for the Global Association of Risk Professionals (GARP).

How IT Can Fuel Strong Collaboration Among Remote, WFH Employees

In the last three months, our workplaces and workspaces have changed, big time. Conference rooms, white boards and in-person meetings have given way to collaboration applications, virtual meetups and home offices amidst the COVID-19 crisis.

Collaboration applications are helping maintain work continuity in a major way–from video meetings to always-on, real-time chat. eWEEK‘s community talked about this trend in a recent #eWEEKchat discussion here on CrowdChat.net/eweekchat.

When it comes to collaboration, for example, Zoom usage has skyrocketed since mandatory work-from-home orders went into effect. At the end of 2019, Zoom reported 12.9 million active users; by April, that figure had surged to more than 200 million users—many of them consumers, not just businesses.

As we’ve adjusted to the virtual office, IT has been tasked with ensuring that the right technology is matched with the right employees to drive maximum companywide productivity—with all-remote teams, of course. In this eWEEK Data Points article, Productiv CEO Jody Shapiro outlines how IT can best support collaboration and productivity among an all-remote workforce.

Data Point No. 1: IT leaders first need visibility into which applications are being used by whom, and how these apps are being used

If three of my business units are collaborating in Slack and one unit is using Teams, that’s a potential problem. This collaboration friction can sap productivity and increase frustration among teams as workers search for files while toggling between Box and OneDrive. And remember, they’re just a credit card click away from purchasing their own collaboration tools and stitching together shadow IT apps.

Just as personal laptops lack the security safeguards that protect company-provisioned devices, shadow IT comes with its own set of risks; specifically, shadow IT apps increase collaboration silos and undercut the uniform employee software experience. IT leaders need a centralized and accurate list of applications and licenses to begin to understand workplace collaboration. They also need the real story behind the application count, or how we are engaging with these apps.

Data Point No. 2: IT can examine application engagement data and then identify overlapping applications

If there are applications with similar functionality deployed across various teams, you need to know this. The goal is to understand which apps are being used the most heavily, so you can then migrate employees off  lesser-used apps and standardize on the best one. This eliminates overlaps and what we call app sprawl. It’s important that you evaluate redundancies by examining engagement data rather than login data alone. If your organization has 1,500 people provisioned on Microsoft Teams and 1,400 people provisioned on Slack, the numbers appear almost equal. But if we look at engagement–or what happens after someone logs in–we see that logins do not equal engagement.

Data Point No. 3: Once you identify overlapping applications, you need to look at engagement at the feature level. 

Look at overall usage, the types of activities people are performing and whether collaboration is happening inside or outside of an organization. Are you getting the most out of each application? What percentage of users are sharing video during Zoom meetings? Are files being shared in Box? If we drill down on features in the Teams/Slack example mentioned above, we might see that employee Slack engagement is 10 times greater than in Teams. This feature-level engagement data gives a more complete picture of how your employees are collaborating remotely.
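
A toy pandas sketch of the distinction drawn here: provisioning and logins look comparable, while a feature-level signal (messages sent, in this invented example) shows where engagement actually is. All figures are made up for illustration.

    # All figures are invented for illustration; the point is that feature-level
    # signals separate apps in a way raw seat counts and logins cannot.
    import pandas as pd

    usage = pd.DataFrame({
        "app":               ["Teams", "Slack"],
        "provisioned_seats": [1500, 1400],
        "monthly_logins":    [1200, 1150],
        "messages_sent":     [8_000, 80_000],   # one possible feature-level engagement signal
    })
    usage["messages_per_login"] = usage["messages_sent"] / usage["monthly_logins"]
    print(usage)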

Data Point No. 4: Next, use this feature-level engagement data to adjust existing application licenses as well as forecast changing application use patterns. 

Perhaps your sales team has Zoom’s Pro licenses, because most virtual meetings are longer than 40 minutes. Meanwhile, an engineer who has fewer meetings may not need such a license. But during the present remote work surge, you need to adjust your license provisioning strategy.

During this new WFH norm, you may have more team members spending more time on Zoom. Your company’s leaders may schedule more video-enabled Zoom meetings to foster a sense of community during this time. Thus, you may need different license tiers for employees. IT leaders can also get regular updates on an application-by-application basis and automatically deprovision, upgrade, or downgrade licenses based on individual usage patterns.
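As a rough illustration of such a rule, the following sketch classifies each user as keep, downgrade or deprovision. The thresholds, tier names and usage records are made up for the example and do not represent any vendor's API.

```python
# A hedged sketch of a right-sizing rule, not any vendor's actual API: decide
# per user whether a paid tier is still warranted, based on simple thresholds.
from datetime import date, timedelta

def recommend(tier, long_meetings_last_90d, last_active):
    """Return 'keep', 'downgrade' or 'deprovision' for one user."""
    if last_active is None or date.today() - last_active > timedelta(days=90):
        return "deprovision"
    if tier == "pro" and long_meetings_last_90d < 3:
        return "downgrade"  # rarely needs meetings beyond the free 40-minute cap
    return "keep"

# Hypothetical usage records: (user, tier, meetings over 40 min, last active).
for name, tier, meetings, last in [
    ("sales_rep", "pro", 22, date.today() - timedelta(days=1)),
    ("engineer", "pro", 1, date.today() - timedelta(days=5)),
    ("contractor", "pro", 0, None),
]:
    print(name, "->", recommend(tier, meetings, last))
```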

Data Point No. 5: After you have tracked application engagement over time by team, feature and device, and made relevant “right size” license adjustments, you can begin to collect and compare benchmark data

Collect trend data on application engagement by app, industry, geography, company size and so on. This allows you to compare your company's business collaboration metrics with similar data from other organizations. Measuring your organization's application provisioning against industry benchmarks also ensures that your application selection is as data-driven as possible.
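As a simple illustration, with entirely made-up numbers, a benchmark comparison can be as plain as flagging metrics that trail the peer median:

```python
# An illustrative comparison with invented figures: flag metrics that trail
# the peer median by more than ten percentage points.
company = {"Zoom video share": 0.42, "Box file sharing": 0.55, "Slack messaging": 0.78}
peer_median = {"Zoom video share": 0.60, "Box file sharing": 0.50, "Slack messaging": 0.70}

for metric, value in company.items():
    gap = value - peer_median[metric]
    status = "below peers" if gap < -0.10 else "in line or better"
    print(f"{metric}: {value:.0%} vs. peer median {peer_median[metric]:.0%} ({status})")
```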

Data Point No. 6: Summary

IT has a key role in enabling effective, efficient collaboration within an organization, and this role is even more critical when all employees are working remotely. By following the aforementioned steps, IT leaders can pass the remote work “stress test.” They can crack open the application engagement black box, gaining visibility into how employees collaborate. This, in turn, helps IT eliminate redundancies and make real-time adjustments as workers’ collaboration needs change. 

If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.

The post How IT Can Fuel Strong Collaboration Among Remote, WFH Employees appeared first on eWEEK.

How Delphix is Speeding Up Shift to Digital for Enterprises https://www.eweek.com/it-management/how-delphix-is-speeding-up-shift-to-digital-for-enterprises/ https://www.eweek.com/it-management/how-delphix-is-speeding-up-shift-to-digital-for-enterprises/#respond Sat, 06 Jun 2020 03:50:00 +0000 https://www.eweek.com/uncategorized/how-delphix-is-speeding-up-shift-to-digital-for-enterprises/ Not too many companies actually come out and say something like this, but Delphix, which originated the DataOps school of new-gen data management, has declared completed the original mission it undertook a decade ago when it launched its first product. That mission was to provide a full-service data platform to accelerate digital and customer-experience transformation for […]

The post How Delphix is Speeding Up Shift to Digital for Enterprises appeared first on eWEEK.

Not too many companies actually come out and say something like this, but Delphix, which originated the DataOps school of new-gen data management, has declared that it has completed the original mission it undertook a decade ago when it launched its first product. That mission was to provide a full-service data platform that accelerates digital and customer-experience transformation for global enterprises.

Sounds simple, but rest assured it is not; DataOps is a bit complicated. Matt Aslett, Research Vice President of Data, AI and Analytics at 451 Research, defines DataOps this way: “The alignment of people, process, and technology to enable more agile and automated approaches to enterprise data management in service of business goals. It aims to provide easier access to enterprise data to meet the demands of various stakeholders who are part of the data supply chain (developers, data scientists, business analysts, DevOps professionals, etc.) in support of a broad range of use cases.”

Here’s why Delphix is claiming its 10-year accomplishment: With physical lockdowns rolling across the world, companies have accelerated the shift from physical to digital business models and operations. Enterprises need to adapt quickly to these changing market conditions or risk being disrupted by more agile competitors. And DataOps is nothing if not a quick method to help enterprises do this.

“With our latest platform release, we’ve completed our original mission,” Jedidiah Yueh, Delphix founder and CEO (pictured), said. “In order to drive digital or customer experience transformations, companies need to harness data across a multi-generational range of systems, from ERP implementations to apps running on the mainframe. Our platform collects data across these systems to fuel cloud, AI, and other digital transformation programs—cutting time and cost by more than 20%.”

One mission all along: To help companies do more with their data

eWEEK was the first publication to announce the founding of Delphix in 2008, during the last financial crisis. Since then, the publication has followed the company closely as it has grown to more than 500 employees and moved to large new headquarters in Redwood City, Calif. All along, its mission has been simply to help companies do more with their data.

——————————————————————————————

Go here to view an eWEEK eSPEAKS video interview with Delphix CEO Jed Yueh.

——————————————————————————————

The Delphix DataOps Platform started with virtualizing Oracle databases and applications so that database administrators could patch and repair them without having to shut down the entire system. Early customers quickly loaded ERP systems such as SAP and Oracle EBS onto the platform, and those, too, achieved impressive results.

After Oracle databases and applications, Delphix added support for Microsoft SQL Server, IBM DB2, SAP ASE, SAP HANA, and other major applications and database platforms. 

“Customers need a comprehensive data platform to help drive transformations,” Yueh said. “Otherwise, it’s like trying to fly a plane without sufficient instrumentation. With our most recent platform release, we have completed everything on our initial roadmap.”

What the DataOps package now includes

In 2019, Delphix made its data platform SDKs available to ISVs. Since launching its SDKs and the DataOps marketplace, Delphix data coverage has expanded to include:

  • legacy platforms: mainframe, Informix;
  • new data platforms: MongoDB, CouchDB;
  • cloud platforms: Aurora, Redshift, Azure SQL;
  • workflow and monitoring platforms: ServiceNow, Splunk;
  • automation platforms: Ansible, Terraform, Jenkins, Chef.

In addition, during the last decade, the Delphix platform has achieved a series of significant scale milestones, while supporting many of the world’s largest companies on their transformation programs:

  • more than 100,000 development, testing, and analytics environments;
  • more than 30,000 customer applications released annually;
  • more than 2 million automated data operations;
  • more than 2 exabytes of customer data across private and public clouds.

The Delphix DataOps Platform provides critical features that instrument data across the application lifecycle, including:

  • virtualization, which radically shrinks data footprint;
  • time machine to travel to any point in time;
  • version control, which lets developers and data scientists manage data like code;
  • compliance, which includes profiling, templates, and automated data masking;
  • integration to enable fast data synchronization across systems;
  • replication to move data across private and public clouds;
  • automation to integrate with DevOps and CI/CD tools (a generic workflow sketch follows this list).
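To illustrate what that kind of DevOps integration can look like in practice, here is a generic sketch of provisioning a masked virtual copy of a database for a CI run. The REST endpoints, payload fields and credentials are placeholders invented for this example; they are not Delphix's actual API.

```python
# A generic sketch of wiring ephemeral, masked test data into a CI job. The
# endpoint names and payload fields are placeholders invented for this
# example; they are not Delphix's actual API.
import requests

BASE = "https://dataops.example.com/api"          # hypothetical service
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credential

def provision_masked_copy(source_db, branch):
    """Request a masked virtual copy of a source database for a CI run."""
    resp = requests.post(f"{BASE}/virtual-copies", headers=HEADERS, json={
        "source": source_db,
        "mask_profile": "pii-default",
        "name": f"ci-{branch}",
    })
    resp.raise_for_status()
    return resp.json()["connection_string"]

def teardown(copy_name):
    """Delete the virtual copy once the pipeline finishes."""
    requests.delete(f"{BASE}/virtual-copies/{copy_name}", headers=HEADERS)

# In a pipeline: provision, run integration tests against the copy, tear down.
conn = provision_masked_copy("orders_prod", branch="feature-123")
print("run tests against:", conn)
teardown("ci-feature-123")
```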

Company’s foundation firmly in place

It took Delphix a decade to achieve its original mission, which laid the foundation for enterprise-wide data coverage. During that period, Delphix also built a profitable business with more than $100 million in annual recurring revenue. Now the company is looking to the future.

“With the foundation in place, the next 10-year horizon is focused on unlocking the full value of the data in our platform,” said Yueh. “Cloud, AI, and the regulatory horizon are forming the perfect storm. Our native data operations, APIs, and SDKs make it increasingly easy for customers and ISVs to weave data across their business processes, compliance, and transformation programs. And we’ll continue to invest in building out that ecosystem.

“In the end, we believe that every company is a data company if it wants to survive. And if companies want to master data operations, they’ll need a purpose-built platform to manage data across the ongoing transformation lifecycle.”

One longtime Delphix user offered his take on the company.

“We use Delphix to synchronize and extract student data from on-prem to AWS for the CSU Chancellor’s Office,” said Rudy Gonzales, Unisys Program Director at California State University. “Now that we have a super-fast data bridge to the cloud, we’re able to harness AI technologies to drive better student outcomes.

“I’m excited to see where Delphix takes its data platform next.”

The post How Delphix is Speeding Up Shift to Digital for Enterprises appeared first on eWEEK.

IBM Think 2020 Digital: Building Reliability, Resiliency in Uncertain Times https://www.eweek.com/innovation/ibm-think-2020-digital-building-reliability-resiliency-in-uncertain-times/ https://www.eweek.com/innovation/ibm-think-2020-digital-building-reliability-resiliency-in-uncertain-times/#respond Fri, 15 May 2020 23:27:00 +0000 https://www.eweek.com/uncategorized/ibm-think-2020-digital-building-reliability-resiliency-in-uncertain-times/ In the normal course of tech industry happenings, spring is the season of Tier 1 vendor conferences for customers and partners. By this time last year, I had attended CES, IBM Think and PartnerWorld, and Dell Tech World, and was preparing to fly to Orlando, Fla., for Lenovo Accelerate. This year the COVID-19 pandemic has […]

The post IBM Think 2020 Digital: Building Reliability, Resiliency in Uncertain Times appeared first on eWEEK.

In the normal course of tech industry happenings, spring is the season of Tier 1 vendor conferences for customers and partners. By this time last year, I had attended CES, IBM Think and PartnerWorld, and Dell Tech World, and was preparing to fly to Orlando, Fla., for Lenovo Accelerate. This year the COVID-19 pandemic has acted like a viral monkey wrench, causing chaos across global businesses and economies.

In these abnormal times, what can IT vendors do to address the fundamental challenges, concerns and fears that their customers and partners are suffering? We learned quite a bit about that at last week’s IBM Think 2020 Digital conference, an event in which the company shifted its annual conference to an online format. Let’s consider how IBM engaged and communicated with participants, as well as a few of the new and updated offerings the company introduced during Think 2020 Digital.

Arvind Krishna: Envisioning a Post-COVID World

For experienced conference attendees, the keynote address from the host vendor’s CEO is a must-see event. Why so? Because along with providing insights into the company he or she leads, these keynotes typically touch on all of the key themes and announcements that will be restated and reinforced in other keynotes and strategic presentations during the conference. In essence, the CEO keynote acts as a microcosm of the larger event.

But IBM Think 2020 was also the debut of new CEO Arvind Krishna. Krishna was chosen for the role on Jan. 30, when the potential shape and impact of COVID-19 was just becoming widely known, and his first day on the job was April 6. New chief executives often arrive at inopportune times, especially those leading organizations in distress. However, it is difficult to think of a case similar to Krishna becoming IBM's CEO as the novel coronavirus was shaking governments, markets and companies everywhere to their foundations.

So how did Krishna do at Think 2020? Extremely well, overall. He began his keynote on a broadly personal note, acknowledging the extreme challenges and stress that IBM’s customers and partners are laboring under, and expressing his appreciation for their participation in the conference. But Krishna also stated that while COVID-19 “is a powerful force of disruption and an unprecedented tragedy,” it “is also a critical turning point … an opportunity to develop new solutions, new ways of working and new partnerships.”

In addition, he emphasized the value that his company’s long history and experience offers to customers and partners. “That IBM has been here before gives me perspective and confidence. This will be seen as the moment when the digital transformation of businesses and society accelerated, and together we laid the groundwork for a post-COVID-19 world. Let’s get to work.”

That focus on defining IBM’s roles as a reliable ally and model of business resiliency set a powerful tone for the rest of the keynote.

The Imperatives of Hybrid Cloud and AI

Krishna continued with what he called the Four Imperatives of Hybrid Cloud (partly inspired, he said, by VMware CEO Pat Gelsinger’s “Five Imperatives for Digital Business”). They are:

  1. History: Companies rarely start from scratch. All carry years of IT decision-making in complex workloads, apps and systems integrated with operations and security. Hybrid cloud meets you in terms of the IT choices you’ve made and the places you do computing.
  2. Choice: Relying on one cloud platform or operator automatically locks you in. Hybrid cloud breaks those shackles.
  3. Physics: No single solution does it all. Workloads are limited by the speed of light. To maximize performance, IT systems need to be physically close to data and systems (see the latency sketch after this list).
  4. Law: Where you are located physically impacts the way you do business in terms of law, privacy and security regulations and compliance practices. Hybrid cloud solutions can be designed to fully address those issues.
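To make the physics point concrete, a back-of-the-envelope latency calculation shows why proximity matters; the distances below are illustrative, not drawn from the keynote.

```python
# Back-of-the-envelope math behind the physics point: light in optical fiber
# covers roughly 200 km per millisecond, so distance alone puts a floor on
# round-trip latency before any processing happens. Distances are examples.
FIBER_KM_PER_MS = 200  # approximate propagation speed in fiber

for km in (100, 1_500, 6_000):          # metro, continental, transoceanic
    rtt_ms = 2 * km / FIBER_KM_PER_MS   # best-case round trip
    print(f"{km:>6} km apart -> at least {rtt_ms:.0f} ms round trip")
```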

Krishna noted another imperative—data sovereignty—whose evolution is still in process. But he also underscored how IBM’s efforts around hybrid cloud, including “big bold bets” like the Red Hat acquisition, are designed to deliver solutions that support, extend and enhance these imperatives and the customer value they represent.

Krishna’s discussion of AI was framed in practical terms related to one of the more shocking effects of the COVID-19 pandemic: the near wholesale failure of global supply chains to provide critically important goods, including personal protective equipment (PPE) for health care professionals and other front-line personnel.

“How can businesses’ supply chains become more resilient to global shocks?” Krishna asked. In significant part by leveraging AI and automation technologies to handle everyday tasks, thus enabling IT management and staff to focus their energies on higher value efforts. AI and automation tools also have valuable roles to play in the deployment and management of secure infrastructures that support new business models and processes, such as working from home (WFH).

Krishna pointed out that though COVID-19 is causing enormous challenges, “it is also underscoring the need for IT platforms and solutions that enable speed, flexibility, insight and innovation.” In addition, he noted how the pandemic has highlighted “why choosing which IT platforms to work with is the most consequential decision business people can make.”

Think 2020 Digital Announcements

IBM made a number of significant announcements at Think 2020 Digital, but I found two particularly notable: the company’s new solutions and initiatives for edge computing in the 5G era and the new enhancements and features in its Cloud Pak for Data v 3.0 portfolio.

  • IBM’s edge computing offerings combine the company’s cloud expertise and experience crafting vertical industry solutions and services with Red Hat’s OpenShift platform for hybrid multicloud deployments. Solutions include the IBM Edge Application Manager capable of enabling the management of up to 10,000 edge nodes by a single administrator; the IBM Telco Network Cloud Manager, which enables service providers (SPs) to manage workloads on both Red Hat OpenShift and Red Hat OpenStack; and edge-enabled versions of IBM Visual Insights, IBM Maximo Production Optimization, IBM Connected Manufacturing, IBM Asset Optimization, IBM Maximo Worker Insights and IBM Visual Inspector. The company also announced a new dedicated IBM Services team for edge computing deployments. In addition, the company announced the IBM Edge Ecosystem and Telco Network Cloud Ecosystem, groups of like-minded vendors who will work together to help customers in edge computing deployments. The Ecosystems are made up of equipment manufacturers, IT and networking vendors and software providers, including Cisco, Dell Technologies, Intel, Juniper, NVIDIA, Samsung and many others. Finally, the announcement provided testimonials by three early adopters of IBM’s edge solutions and services: Vodafone, Samsung and G-Evolution.
  • IBM’s Cloud Pak for Data is a fully integrated data and AI platform that modernizes and simplifies how businesses collect, organize and analyze data to infuse AI functions into their businesses. Since it was launched in the second half of 2018, Cloud Pak for Data has steadily evolved via new and enhanced features. During Think 2020 Digital, IBM announced new additions to Cloud Pak for Data 3.0, including a revamped Unified User Interface that makes navigating and scaling the platform easier and more intuitive, IBM Planning Analytics and IBM InfoSphere Master Data Connect service extensions, the ability to pull in data from The Weather Company, and the addition of IBM InfoSphere Virtual Data Pipeline (for an extra layer of security and control) and Watson OpenScale’s Model Risk Management (for automating the active testing of AI models throughout their life cycle). The new 3.0 version of Cloud Pak for Data also includes the ability to run on IBM’s Power Systems servers (in addition to x86-based systems), which should be welcomed by a substantial number of the company’s Power System customers.

In addition to the edge computing platform for 5G and the new and enhanced Cloud Pak for Data features, IBM announced Watson AIOps, which uses AI to automate the detection and diagnosis of, and response to, IT anomalies in real time, as well as a new cloud platform that supports the stringent security and privacy requirements of financial services companies. Finally, IBM revealed revisions to PartnerWorld, its business partner network, that include clear pathways for creating applications, developing code, integrating intellectual property (IP) or delivering services with the IBM Cloud.

Final Analysis

So, what’s the most important takeaway from IBM Think 2020 Digital? The company deserves congratulations for a conference that offered support to distressed customers and partners while providing a nuanced view of the roles it can play as both a vendor and ally.

The new and enhanced solutions announced during the conference all reflected or clarified an essential point about IBM: that few if any IT vendors can claim greater understanding of or deeper insights into so wide a range of enterprise businesses, vertical industries and global markets.

It is also worth noting that at the same time IBM was discussing how customers and partners could successfully face the challenges before them and successfully adapt to changing circumstances, Krishna described how IBM itself intends to evolve.

At the end of his keynote, Krishna stated IBM’s commitment to four essential goals: 1) continuing to apply and deepen its understanding of business and industry needs, requirements and opportunities, 2) furthering its leadership in developing and delivering good, trustworthy technologies, 3) fostering a more entrepreneurial culture that is easier for customers to work with, and 4) remaining obsessed with listening to clients’ needs, preparing them for an uncertain future and supporting their journeys to hybrid cloud and AI.

By the conclusion of Think 2020 Digital, it was clear that IBM understands the challenges that businesses face in these far-from-normal times. It is also clear that the company plans to use its historical and business perspective to be a reliable ally and help customers and partners develop the resiliency they need for what will be, at best, a difficult journey.

Charles King is a principal analyst at Pund-IT and a regular contributor to eWEEK. © 2019 Pund-IT, Inc. All rights reserved.

The post IBM Think 2020 Digital: Building Reliability, Resiliency in Uncertain Times appeared first on eWEEK.
