
Choose the Right Compute Service

Learning Objectives

After completing this unit, you will be able to:

  • Differentiate between compute service offerings.
  • Use best practices when deciding between virtual machines, containers, and serverless for your workload.

When you evaluate whether to use virtual machines, containers, or serverless options, you have to match the choice to your workload’s characteristics and constraints. In this unit, you learn how to choose the right compute option for your workload through a series of scenarios. These selections may vary from person to person, so consider the following scenarios as suggestions.

Scenario 1: Determine the Cutest Kitten

Let’s say you want to develop a tournament feature for your cat photo app to decide the cutest kitten. The picture volume is so high that you may need the help of machine learning to filter out submissions that don’t contain a feline.

You can run this workload on the same compute platform as your cat photo application, but it may be better suited for another. 

The workload has the following characteristics.

  • It’s a small feature that you are building from scratch.
  • You want to focus on your code and not the infrastructure maintenance.
  • The feature requires little control over the operating system.
  • Since the picture volume is so high, you need the feature to scale seamlessly.
  • You don’t know how to configure web servers, application servers, and databases for this kind of workload.
  • You want a cloud-native solution that inherits scaling and resilience out of the box and operates in multiple AWS Regions for latency and high-availability purposes.

For this tournament feature, a serverless option such as AWS Lambda is the best choice because Lambda scales seamlessly to handle the high volume of incoming cat pictures and lets you focus on your code rather than your infrastructure. To build this feature, you use serverless compute with a managed machine learning service.
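
To make that concrete, here is a minimal sketch of what the filtering piece could look like, assuming a Python Lambda function triggered by photo uploads to an S3 bucket and Amazon Rekognition as the managed machine learning service. The bucket and object names come from the standard S3 event, and the simple "Cat" label check is illustrative only.

    import boto3

    rekognition = boto3.client("rekognition")

    def lambda_handler(event, context):
        """Flag each uploaded photo as containing a cat or not."""
        results = []
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Ask Rekognition to label the submitted photo.
            response = rekognition.detect_labels(
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                MaxLabels=10,
                MinConfidence=80,
            )

            # Keep the submission only if a cat was detected.
            labels = {label["Name"] for label in response["Labels"]}
            results.append({"key": key, "is_cat": "Cat" in labels})

        return results

Because Lambda runs a copy of this function for each incoming event, scaling to a flood of submissions happens without any server configuration on your part.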

Scenario 2: Migrate from On-Premises

Let’s take a look at another scenario: You are migrating an application for a local veterinary clinic that allows pet owners to log in to a secure portal to view their pets’ health records and information. This application already exists and runs on a virtual machine on-premises. The existing technology requires a custom solution and depends on the resources and functionality of an operating system.

The workload has the following characteristics.

  • Some processes on the backend need to run as specific OS users, so you need full control over the operating system of your choice.
  • To protect sensitive pet health records, the workload needs to be isolated due to regulatory requirements.
  • You are not using containers and don’t have the budget to train your staff to use them.
  • You need to migrate the application quickly with minimal changes because the on-premises co-location annual contract will expire soon.

Since this workload already runs on virtual machines on-premises and you need to migrate the application quickly with few changes, a virtual machine approach, such as Amazon EC2, would be ideal in AWS. Amazon EC2 gives you full control over the operating system of your choice and lets you create an isolated environment for your application.

This enables you to run sensitive workloads that must be isolated due to regulatory requirements on an EC2 instance, or to allocate a Dedicated Instance, where the underlying hardware is not shared with other AWS customers. For the migration itself, AWS offers server migration services that move your servers to AWS while keeping the same infrastructure footprint.
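
As an illustration, the sketch below uses the AWS SDK for Python (boto3) to launch an EC2 instance with dedicated tenancy, so its underlying hardware isn’t shared with other AWS customers. The AMI ID, instance type, and subnet ID are placeholders; in practice, a server migration service would typically produce the machine image for you.

    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder: image of the migrated server
        InstanceType="m5.large",              # placeholder: sized to match on-premises specs
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # placeholder: an isolated subnet in your VPC
        Placement={"Tenancy": "dedicated"},   # run on hardware not shared with other customers
    )

    print(response["Instances"][0]["InstanceId"])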

Scenario 3: Push Changes to Production Quickly

Let’s say you are running your cat photo app development environment on your laptop, using a Docker container. You want to run this development environment on AWS. You need to choose a service that removes the burden of maintaining and managing containers on AWS.

The workload has the following characteristics.

  • The environment often handles scaling bursts.
  • You and other developers working on the app understand Docker well.
  • You have a complicated and tedious deployment process that requires additional debugging.

Since your development environment is designed to run in a container, it makes sense to explore AWS container services, such as Amazon EKS, Amazon ECS, or AWS Fargate. Running containers on AWS helps your environment scale more quickly because containers take less time to spin up. This is reduced further when running on AWS Fargate, which takes care of the underlying compute capacity for you.
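
As a sketch of how that might look, the following Python (boto3) example registers a small Fargate task definition for the containerized dev environment and runs it on an ECS cluster. The cluster name, container image URI, execution role ARN, and subnet ID are placeholders for values from your own account.

    import boto3

    ecs = boto3.client("ecs")

    # Register a small Fargate task definition for the dev environment.
    task_def = ecs.register_task_definition(
        family="cat-photo-dev",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
        containerDefinitions=[
            {
                "name": "cat-photo-app",
                "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/cat-photo-app:dev",  # placeholder
                "essential": True,
                "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            }
        ],
    )

    # Run the task on Fargate so AWS manages the underlying compute capacity.
    ecs.run_task(
        cluster="dev-cluster",  # placeholder cluster name
        launchType="FARGATE",
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )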

This compute platform is also beneficial if you have a team of developers that understands the tool. Docker was created by developers for developers. While Docker helps reduce gaps between environments, systems administrators or security analysts may already be used to operating and maintaining fleets of servers with application provisioning tools while keeping security guardrails in place. If the development team responsible for the cat photo app is knowledgeable about Docker, it could be beneficial to run the application using container technologies.

Container debugging may also require extra configuration. If you have a complicated and tedious deployment process, Docker can help. However, if you have a simple app, it adds unnecessary complexity.

Choose Convenience or Control

There is no better or worse compute service; it’s all about balancing the convenience you want against the amount of control you need. While having maximum convenience is nice, it may not always be possible. Sometimes, control is required to create a custom solution.

The words convenience and control connected by a bidirectional arrow, depicting that every AWS compute service falls somewhere along this spectrum.

To summarize:

  • If you want to use Amazon EC2, you have full control over the operating system of your choice. However, this platform is less convenient than serverless options, because there’s still an operating system to maintain, which involves managing software patching and updates, OS configuration, networking, scaling, and more.
  • If your application is designed to run in a container, use Amazon EKS, Amazon ECS, or AWS Fargate. These services are designed to remove the burden of maintaining and managing containers on AWS.
  • If your application requires little control over the operating system and you want to inherit scaling and resilience out of the box, use AWS Lambda.

Remember, these points are guidelines. Your workload will be unique to the app you’re building, so the solution that best meets your needs may look different from what’s outlined here.

Wrap Up

An essential aspect to consider when designing your workloads is what each compute option has to offer. Consider their pros and cons, and try to plan ahead. Don’t limit yourself to the tools and resources you already feel comfortable with. It is common to see applications that run on all three types of compute: AWS Lambda functions, ECS clusters, and EC2 instances.

Before deciding which technology to use, research your project requirements, evaluate the time to market, and consider the effort needed to train everyone involved in the project to use that technology. In the end, the best tool is the tool your team has mastered!

