Introducing the FME Server Kubernetes tech preview
By deploying FME Server with Kubernetes, the most popular system for managing containers, large enterprises can easily maintain consistent FME Server deployments across the cloud. Here are some reasons why you might want to deploy FME Server with Kubernetes, along with resources on how it works and how to get started.
What is Kubernetes?
It all began with Docker and containerization, which make it easier for developers to create, deploy, and run applications anywhere. Customers deploying FME Server using Docker have consistently cited the following benefits:
- Increased resource efficiency: Compared to deploying on Virtual Machines, containers enable applications to run using fewer resources, lowering server and maintenance costs. Because containers only share the kernel (the core of the operating system) and come with their own environment, it’s safe to run multiple containers on one host—increasing efficiency while also increasing portability.
- Consistent environments: Docker makes it easier to ensure parity between all of your environments (e.g. dev, staging, production). This means every team member works in a production-equivalent environment, making it easier to analyze and fix bugs.
Docker, therefore, provides an easy and fast way for teams to release and manage applications. For many organizations hosting simpler applications, Docker is enough. However, if you are deploying and managing larger, more complex applications (or many applications that comprise multiple containers), it can be challenging to do so with Docker alone. This is why tools began to appear to orchestrate, or manage, all of the containers in operation.
One of the most popular container orchestration tools is Kubernetes. Developed by Google and now open source, Kubernetes builds on top of what containerization already provides and helps teams manage the lifecycle of their containers and how they interact—especially in large, dynamic environments. It provides a system to enable organizations to run, maintain, and upgrade scalable applications with minimal downtime.
Check out the Resources section at the end of this post for the material we used internally to ramp up on Kubernetes.
How do I deploy FME Server with Kubernetes?
The documentation for the FME Server Kubernetes deployment walks you through getting set up and has links to the installation scripts.
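If you already have a cluster and the Helm client available, the installation itself comes down to a handful of commands. The sketch below is illustrative only: the chart repository URL and chart name are placeholders, so use the values given in the documentation.

```bash
# Placeholder repository URL and chart name; take the real ones from the
# FME Server Kubernetes documentation.
helm repo add safesoftware https://example.com/fme-helm-charts
helm repo update

# Install the chart into its own namespace (Helm 3 syntax)
helm install fmeserver safesoftware/fmeserver \
  --namespace fmeserver --create-namespace

# Watch the pods come up
kubectl get pods --namespace fmeserver --watch
```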
Having just been to KubeCon (along with 8,000 other people), I can say, without doubt, that Kubernetes is now the standard for container orchestration. The growth is staggering and all major cloud providers now offer Kubernetes as a service:
- Amazon EKS
- Azure AKS
- Google Kubernetes Engine (GKE)
We have successfully tested the deployment on all of these platforms. Check our current and evolving documentation on deploying into the different cloud environments and let us know if you have any questions. If you are new to all of this, we recommend starting with Google Kubernetes Engine, as it is the simplest place to start.
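If you want to try this on Google Kubernetes Engine, creating a small cluster to deploy into looks roughly like the following. This is a sketch that assumes the gcloud CLI is installed and authenticated; the cluster name, zone, and node count are just examples.

```bash
# Create a three-node cluster (name and zone are examples)
gcloud container clusters create fme-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Point kubectl at the new cluster
gcloud container clusters get-credentials fme-cluster --zone us-central1-a
kubectl get nodes
```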
Why deploy FME Server with Kubernetes?
The Kubernetes deployment of FME Server is suitable for larger organizations that have a dedicated operations team.
Automate Deployment
Kubernetes brings software development and operations together by design.
Kubernetes, with its declarative configuration, makes it easy to automate the deployment of complex, large-scale applications. In doing so, Kubernetes shifts infrastructure to code, allowing you to version control your entire infrastructure, which makes all environments (e.g. development, staging, production) easily testable and reproducible. It also makes a clear distinction between the operating system, application management, and application code. This separation of concerns makes it easier to create specialized teams focused on looking after each area.
Previously, when automating the deployment of FME Server to the cloud, you would need to script your own logic using a combination of cloud-specific tooling (e.g. AWS CloudFormation) and configuration management (e.g. Chef) to configure the state of the machine. This worked, but the logic was not fully portable between cloud environments. Also, neither of these tools was particularly good at describing the dependencies and attributes needed to run a complex container-based application in a reliable and consistent manner. Kubernetes builds an abstraction layer on top of the cloud infrastructure, turning any cloud-specific settings into properties that you set.
This means you can now create an FME Server deployment that can be rolled out into any cloud with little effort and consistent results. Even better, Safe maintains this deployment, so you don't need to worry about planning the architecture or the installation, and eventually not even upgrades.
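In practice, "infrastructure as code" here means keeping the deployment's configuration under version control and letting Helm reconcile the cluster to whatever that configuration declares. A rough sketch, using a hypothetical values file and the placeholder chart name from earlier:

```bash
# values-production.yaml holds the deployment's configuration and lives in
# version control next to the rest of your infrastructure code.
git add values-production.yaml
git commit -m "Tune FME Server configuration for production"

# 'helm upgrade --install' is idempotent: it creates the release if it does
# not exist, otherwise it reconciles the release to the declared state.
helm upgrade --install fmeserver safesoftware/fmeserver \
  --namespace fmeserver -f values-production.yaml
```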
Continuous Hotfixes
Patch FME Server with minimal impact to performance and availability.
Kubernetes, via Helm charts, enables an update path to be defined in configuration files. This means that when the application needs security patches or minor upgrades, a single command can work out which parts need to be redeployed and then perform rolling updates of those containers. Even better, if the upgrade fails, it can roll the application back to the previous version. This allows application updates with minimal impact on performance and availability.
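For reference, the general Helm pattern for rolling updates and rollbacks looks like the sketch below. The chart and deployment names are placeholders, and note the caveat in the next paragraph about the FME Server upgrade path.

```bash
# Pull the latest chart versions and apply a minor upgrade or patch
helm repo update
helm upgrade fmeserver safesoftware/fmeserver --namespace fmeserver

# Watch the rolling update progress (the deployment name is hypothetical)
kubectl rollout status deployment/fmeserver-core --namespace fmeserver

# If the upgrade misbehaves, roll back to a previous release revision
helm history fmeserver --namespace fmeserver
helm rollback fmeserver 1 --namespace fmeserver
```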
With regards to the FME Server Kubernetes deployment, the upgrade path is not yet tested. So while the helm command may work, you will likely experience mixed results if you try to upgrade FME Server this way. Looking ahead, I think there is an exciting future for FME Server upgrades. If you have any feedback on this, please get in touch.
Easier to deploy a highly available FME Server
Kubernetes was purposefully built to support fault-tolerant and highly available environments.
Currently, deploying a highly available FME Server into a cloud environment is a manual task with many components to install and coordinate. You need to worry about load balancers, networking, storage, and provisioning infrastructure, and it is different for each cloud provider.
With the FME Server Kubernetes deployment, there is a single deployment that can be configured to be highly available or not, depending on your needs.
For example, you may initially run only one instance with all containers on a single host. This is not very resilient: if the hardware fails, the application still goes down. As the importance of the application grows, you can transition to a highly available deployment by adding more instances and scaling certain components across multiple hosts. In this multi-host deployment, you don't need to worry about which components run on which instances: Kubernetes takes care of scheduling them, and it monitors health checks on all components.
Even if you are not running a highly available deployment, Kubernetes helps you achieve a high level of uptime: it will automatically reschedule containers that go down, which should reduce downtime to seconds in certain scenarios.
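You can see this self-healing behaviour for yourself by deleting a pod and watching Kubernetes replace it. The pod name below is illustrative only.

```bash
# List the running FME Server pods
kubectl get pods --namespace fmeserver

# Simulate a failure by deleting one of them (name is illustrative)
kubectl delete pod fmeserver-core-5f7d9c-abc12 --namespace fmeserver

# Kubernetes schedules a replacement within seconds
kubectl get pods --namespace fmeserver --watch
```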
Performance & Scalability
Scale specific parts of FME Server to respond to different usage patterns.
- Kubernetes will load balance incoming requests across available Core/Web containers. If there is a spike in web traffic, then you can scale the core, web and ingress containers.
- If there is a spike in job requests and lots of jobs in the queue, Kubernetes makes it easy to scale up the number of FME Engines running by provisioning additional engine containers in the cluster, as sketched below.
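Here is roughly what scaling the engines could look like. The deployment name and values key are assumptions; check the chart for the actual names.

```bash
# Quick, imperative scaling (the deployment name is an assumption)
kubectl scale deployment fmeserver-engine --replicas=6 --namespace fmeserver

# Better: declare the change so it survives the next 'helm upgrade'
# (the values key 'engines.replicas' is hypothetical)
helm upgrade fmeserver safesoftware/fmeserver \
  --namespace fmeserver --set engines.replicas=6
```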
Security
Kubernetes was built from the ground up with security in mind.
- The FME Server Kubernetes deployment is HTTPS only, either with your own certificate or one from Let's Encrypt. Out of the box, HTTPS is enabled via a self-signed certificate; the sketch below shows how to swap in your own.
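The usual Kubernetes approach to supplying your own certificate is to store it in a TLS secret and point the ingress at it. This is only a sketch: the secret name and chart value are assumptions, so consult the deployment documentation for the supported options.

```bash
# Store your certificate and private key in a Kubernetes TLS secret
kubectl create secret tls fmeserver-tls \
  --cert=server.crt --key=server.key --namespace fmeserver

# Reference the secret from the deployment (the values key is hypothetical)
helm upgrade fmeserver safesoftware/fmeserver \
  --namespace fmeserver --set ingress.tlsSecretName=fmeserver-tls
```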
Resources
These are some of the resources we have used internally to ramp up on Kubernetes.
Kubernetes Tutorials
- Official Tutorial
- Advanced Tutorial – Maintained by Kelsey Hightower from Google. For in-depth understanding.
Best Kubernetes Talks
- Kubernetes Deconstructed – Kubernetes components explained one by one.
- The Easy–Don’t Drive Yourself Crazy–Way to Kubernetes Networking – Kubernetes Networking explained with a lot of simple graphs.
Helm Talks
- Helm charts from the ground up – A fairly basic talk about containers, clusters, and Helm with a lot of live demos. A good talk to get a feel for what Helm is.
- Helm chart patterns – A talk from one of the Helm maintainers with tips and tricks for building Helm charts.