Part 2: The Best Way to Select a Proxy Architecture for Microservices Application Delivery

Dec 18th, 2019 10:48am by Pankaj Gupta
Feature image via Pixabay.

This article is the second in a series on selecting the right proxy architecture for the delivery of microservices-based applications. The first article covered the importance of choosing the right proxy architecture, the evaluation criteria and an overview of four architecture choices. This article will take a deep dive into the two-tier ingress proxy architecture.

Pankaj Gupta
Pankaj is senior director of cloud native application delivery solutions at Citrix. Pankaj advises customers on hybrid multicloud microservices application-delivery strategies. In prior roles at Cisco, he spearheaded strategic marketing initiatives for its networking, security and software portfolios. Pankaj is passionate about working with the DevOps community on best practices for microservices- and Kubernetes-based application delivery.

Our experience and observations indicate that the two-tier ingress proxy architecture is the quickest and simplest architecture for deploying applications in production, for cloud native novices and experts alike.

The two-tier ingress proxy architecture has two layers of application delivery controllers (ADCs) for north-south (N-S) traffic. The first, or green, ADC shown in the diagram is primarily used for L4 load balancing of inbound traffic, as well as for N-S traffic security functions, such as SSL termination and web application firewall (WAF). It is usually managed by the existing networking team members who are familiar with internet-facing traffic. This green ADC might be a NetScaler ADC or a similar product. The green ADC can also be used for L4-7 load balancing, SSL termination and WAF functions for other monolithic applications in use simultaneously.

[Diagram: two-tier ingress proxy architecture]

The second ADC, shown in blue in the diagram, handles L7 load balancing for N-S traffic. It is managed by the platform team and is used within the Kubernetes cluster to direct traffic to the correct node. Layer 7 attributes, like information in the URL and HTTP headers, can be used for traffic load-balancing decisions. The blue ADC continuously receives updates about the availability and respective IP addresses of the microservices pods within the Kubernetes cluster and can make decisions about which pod is best able to handle the request. Deployed as a container inside the Kubernetes cluster, the blue ADC can be a NetScaler CPX (containerized ADC) or a similar product.
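
To make the blue ADC’s role concrete, here is a minimal sketch of a Kubernetes Ingress resource that routes N-S traffic by URL path. The host, service names and ingress class are hypothetical placeholders; the exact resource an in-cluster ADC such as NetScaler CPX consumes (a standard Ingress or a product-specific CRD) varies by vendor.

```yaml
# Sketch: L7 path-based routing handled by the in-cluster (blue) ADC.
# The ingressClassName, host and service names are placeholders, not product defaults.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
spec:
  ingressClassName: citrix        # assumption: the blue ADC registers an ingress class
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /catalog
        pathType: Prefix
        backend:
          service:
            name: catalog-svc     # microservice reached through the blue ADC
            port:
              number: 80
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-svc
            port:
              number: 80
```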

The east-west (E-W) traffic between microservices pods is managed by open source kube-proxy, which is a basic L4 load balancer with a very simple IP address-based round-robin or a least-connection algorithm. Kube-proxy lacks many advanced features like Layer 7 load balancing, security and observability, making it a blind spot for E-W traffic.
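
For contrast, the sketch below shows the kind of plain Kubernetes Service that kube-proxy load balances for E-W traffic; it is purely L4, with no L7 routing, security or telemetry. The names and ports are illustrative.

```yaml
# Sketch: a plain ClusterIP Service. kube-proxy distributes connections to the
# matching pods at L4 only; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: inventory
spec:
  selector:
    app: inventory
  ports:
  - port: 8080        # virtual port other microservices call
    targetPort: 8080  # container port on the inventory pods
```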

Let’s evaluate how the two-tier ingress proxy architecture meets the seven key criteria that matter to various stakeholders.

Application Security

Security is a mandatory requirement for all applications and sits at the top of everyone’s priority list. The green ADC should provide comprehensive security across Layers 3-7 for N-S traffic, and most ADCs do that well. SSL termination is best done at the edge to allow for the inspection of encrypted traffic. Authentication can be applied at either the green or blue ADC. Kube-proxy provides very limited network policy and security support for inter-microservices E-W traffic; for E-W network policy and segmentation, Project Calico or similar products need to be deployed. Bottom line: two-tier ingress provides excellent security for N-S traffic but is very limited for E-W traffic.
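
As an illustration of what E-W segmentation looks like once a CNI such as Project Calico is deployed, here is a minimal Kubernetes NetworkPolicy sketch; the labels and port are hypothetical.

```yaml
# Sketch: restrict E-W traffic so only frontend pods may reach the orders pods.
# Requires a CNI that enforces NetworkPolicy (for example, Project Calico); labels are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: orders
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```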

Observability

Observability is crucial for understanding what is happening in a microservices environment. Valuable insight helps teams find issues and troubleshoot faster, and it enables site reliability engineers (SREs) to investigate incidents and prevent future ones.

Both the green and blue ADCs provide excellent visibility for N-S traffic because all N-S traffic flows through them. They can report the telemetry to collectors for processing and subsequent insight and troubleshooting. Visibility into E-W traffic is highly restricted because kube-proxy has very limited telemetry capabilities. Bottom line: two-tier ingress offers excellent observability for N-S traffic but very limited observability for E-W traffic.

Continuous Deployment

A two-tier ingress proxy architecture supports advanced traffic management for N-S traffic, automating how traffic is shifted between different versions of a microservice to enable automated canary deployments, progressive rollouts, blue/green deployments and rollbacks.

For N-S traffic, two-tier ingress proxy architecture has excellent capabilities for continuous deployment because it can integrate with CI/CD tools, such as Spinnaker and Jenkins X. For E-W traffic, kube-proxy offers only basic load balancing and lacks APIs to integrate with tools for continuous deployment. Bottom line: two-tier ingress is excellent for continuous deployment of N-S traffic but provides almost no capabilities for E-W traffic.
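
To illustrate the kind of N-S traffic splitting such a rollout relies on, the sketch below uses the community NGINX ingress controller’s canary annotations purely as an example of weighted routing between two versions; other ingress proxies, including NetScaler, express the same idea through their own CRDs or policies, and the host and service names are placeholders.

```yaml
# Illustration only: weighted canary routing with the community NGINX ingress controller.
# Sends roughly 10% of N-S traffic to the v2 service; other ADCs use their own CRDs/policies.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkout-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /checkout
        pathType: Prefix
        backend:
          service:
            name: checkout-v2     # new version receiving the canary slice of traffic
            port:
              number: 80
```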

Scalability and Performance

The two-tier ingress architecture scales well for N-S traffic. Multiple N-S ADCs can be grouped together as a cluster in an active-active formation to process requests in parallel. E-W traffic relies on kube-proxy, which offers three deployment modes: userspace, iptables and IPVS. Because iptables mode has limited scalability, deploying kube-proxy in IPVS (IP Virtual Server) mode is recommended; IPVS is designed for better scalability, faster response times and lower CPU usage, but it adds complexity. In summary, two-tier ingress is excellent for N-S traffic scalability but merely good for E-W traffic.
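
For reference, selecting IPVS mode is a kube-proxy configuration change (commonly applied through the kube-proxy ConfigMap and requiring the IPVS kernel modules on the nodes). A minimal sketch:

```yaml
# Sketch: kube-proxy configuration selecting IPVS mode.
# Typically applied via the kube-proxy ConfigMap; nodes need the IPVS kernel modules loaded.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin; least connection ("lc") and other schedulers are also available
```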

Open Source Tools Integration

It is vital that the ingress proxy architecture integrates with the tools IT already uses and is familiar with.

For N-S traffic, this is pretty simple. Most ADCs for north-south traffic will integrate with popular open source tools, like Prometheus, Grafana, Spinnaker, Elasticsearch, Fluentd and Kibana, for data collection, monitoring, analysis and CI/CD participation. E-W traffic is bound by the limitations of kube-proxy’s APIs for open source tools integration, which makes it harder to carry out these tasks for inter-pod traffic. The two-tier ingress option thus offers excellent open source tools integration for N-S traffic and very limited integration options for E-W traffic.
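
As a small example of that N-S integration, a Prometheus scrape job pointed at an ADC metrics exporter might look like the sketch below; the job name, exporter address and port are assumptions, since the actual exporter and endpoint differ by ADC product.

```yaml
# Sketch: scraping N-S ADC telemetry with Prometheus.
# The exporter hostname and port are hypothetical; real exporters vary by ADC product.
scrape_configs:
  - job_name: 'ns-adc-ingress'
    static_configs:
      - targets: ['adc-metrics-exporter:8888']
```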

Istio Support for Open Source Control Plane

As enterprises move toward a unified control plane, Istio, which was developed by Google and IBM in partnership with the Envoy team from Lyft, is emerging as a popular open source preference. Many ADCs for N-S traffic will integrate with Istio, but kube-proxy for E-W traffic does not currently integrate with Istio. Bottom line: Istio integration is supported for N-S traffic but not for E-W traffic.

Required IT Skillsets

Because the two-tier ingress proxy architecture is bifurcated by design, it is easy to set specific demarcation points for control: the network team can own and manage the green ADC, and the platform team can work inside the Kubernetes environment. Neither the network team nor the platform team needs to learn many new things. Both teams can continue doing what they know, at their own speed, while ensuring effective application delivery. It is thus our conclusion that two-tier ingress is the simplest and fastest route to production for the vast majority of network and platform teams.

Unified Ingress Proxy Architecture: A Great Choice for Network-Savvy Platform Teams

Unified ingress is very similar to the two-tier ingress proxy architecture, except that it unifies the two tiers of application delivery controllers (ADCs, or proxies) for N-S traffic into one. Removing an ADC tier effectively eliminates one hop of latency for N-S traffic.

[Diagram: unified ingress proxy architecture]

Unified ingress has the same benefits and drawbacks as the two-tier ingress proxy architecture for security, observability, continuous deployment, scale and performance, open source tools support and Istio integration. Where it differs is in the skill sets required for implementation. With unified ingress, both the ADCs for N-S traffic and kube-proxy for the E-W traffic are managed by the platform team members, who must be very network savvy to implement and manage this type of architecture.

A unified ingress proxy architecture is capable of participating in the Kubernetes cluster’s overlay network, which allows it to communicate directly with the microservices pods. Therefore, the platform team has to be knowledgeable about Layers 3-7 of the network stack to take full advantage of this architecture. Unified ingress is also suitable for internal, employee-facing applications, with the option to add a WAF, SSL termination and support for external customer-facing applications later.

The unified ingress proxy architecture is fairly simple to deploy compared to a service mesh and offers excellent capabilities for N-S traffic. But it has very limited functionality for E-W traffic due to the limitations of kube-proxy, and it requires a network-savvy platform team to implement.

When it comes to proxy architecture, there are plenty of choices. In evaluating them, be sure to consider whether they provide the right level of security, observability, advanced traffic management and troubleshooting capabilities, and whether they complement your open source tools strategy. In doing so, we feel you’ll make the right choice for your organization.

The next article provides a deep dive into the service mesh architecture. Stay tuned.
