I was recently tasked with deploying two Fortinet FortiGate firewalls in Azure in a highly available active/active model. I quickly discovered that there are currently only two deployment types available in the Azure Marketplace: a single VM deployment and a high availability deployment (which is an active/passive model and wasn’t what I was after).

[Image: FG NGFW Marketplace Options]

I did some digging around on the Fortinet support sites and discovered that you can achieve an active/active model in Azure using dual load balancers (a public and an internal Azure load balancer), as described in this Fortinet document: https://www.fortinet.com/content/dam/fortinet/assets/deployment-guides/dg-fortigate-high-availability-azure.pdf.

Deployment

To achieve an active/active model you must deploy two separate FortiGates using the single VM deployment option and then deploy the Azure load balancers separately.

I will not be going through how to deploy the FortiGates and the required VNets, subnets, route tables, etc., as that information can be found on Fortinet’s support site: http://cookbook.fortinet.com/deploying-fortigate-azure/.

NOTE: When deploying each FortiGate, ensure they are deployed into different frontend and backend subnets, otherwise the route tables will end up routing all traffic to one FortiGate.

Once you have two FortiGates, a public load balancer and an internal load balancer deployed in Azure, you are ready to configure the FortiGates.

Configuration

NOTE: Before proceeding, ensure you have configured static routes for all of your Azure subnets on each FortiGate, otherwise the FortiGates will not be able to route Azure traffic correctly.
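For example, the static routes can be added from the FortiGate CLI along these lines; the subnet, gateway and interface values below are placeholders for this example and will differ in your environment:

    # Placeholder values: 10.1.0.0/16 = Azure VNet address space,
    # 10.1.2.1 = backend subnet gateway, port2 = internal interface
    config router static
        edit 0
            set dst 10.1.0.0 255.255.0.0
            set gateway 10.1.2.1
            set device port2
        next
    end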

Outbound traffic

To direct all internet traffic from Azure via the FortiGates, you will need to do some configuration on the Azure internal load balancer and create a user defined route (a scripted example follows the steps below).

  1. Create a load balance rule with:
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Health probe: Health probe port (e.g. port 22)
    • Session Persistence: Client IP
    • Floating IP: Enabled
  2. Repeat step 1 for port 80 and any other ports you require
  3. Create an Azure route table with a default route to the Azure internal load balancer IP address
  4. Assign the route table to the required Azure subnets
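If you prefer to script it, a rough Azure PowerShell (AzureRM) sketch of steps 1, 3 and 4 would look something like the following. The resource group, load balancer, frontend, pool, probe and IP address names are placeholders for this example:

    # Placeholder names for this example
    $rg  = "fortigate-rg"
    $ilb = Get-AzureRmLoadBalancer -Name "fg-internal-lb" -ResourceGroupName $rg

    # Step 1: load balance rule for 443 with floating IP and client IP persistence
    $frontend = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $ilb -Name "fg-ilb-frontend"
    $pool     = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $ilb -Name "fg-backend-pool"
    $probe    = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $ilb -Name "fg-probe-22"

    $ilb | Add-AzureRmLoadBalancerRuleConfig -Name "outbound-443" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe `
        -Protocol Tcp -FrontendPort 443 -BackendPort 443 `
        -LoadDistribution SourceIP -EnableFloatingIP
    $ilb | Set-AzureRmLoadBalancer

    # Steps 3 and 4: route table with a default route pointing at the internal load balancer IP
    $route = New-AzureRmRouteConfig -Name "default-via-fortigate" -AddressPrefix "0.0.0.0/0" `
        -NextHopType VirtualAppliance -NextHopIpAddress "10.1.3.10"
    $rt = New-AzureRmRouteTable -Name "rt-via-fortigate" -ResourceGroupName $rg -Location "australiaeast" -Route $route
    # Assign the route table to the required subnets, e.g. with
    # Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "app-subnet" -AddressPrefix "10.1.4.0/24" -RouteTable $rt
    # followed by Set-AzureRmVirtualNetwork -VirtualNetwork $vnet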

IMPORTANT: In order for the load balance rules to work you must add a static route on each FortiGate for IP address 168.63.129.16. This is required for the Azure health probe to communicate with the FortiGates and perform health checks.
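For example, the probe route can be added from the FortiGate CLI along these lines (the gateway and interface values are placeholders; use the gateway of the subnet the probe arrives on):

    # 168.63.129.16 is Azure's health probe / platform address
    config router static
        edit 0
            set dst 168.63.129.16 255.255.255.255
            set gateway 10.1.1.1
            set device port1
        next
    end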

[Image: FG Azure Health Probe Cfg]

Once complete, the outbound internet traffic flow will be as follows:

[Image: FG Internet Traffic Flow]

Inbound traffic

To publish something like a web server to the internet using the FortiGates, you will need to do some configuration on the Azure public load balancer (a scripted example follows the steps below).

Let’s say I have a web server that resides on my Azure DMZ subnet and hosts a simple website on HTTPS/443. For this example the web server has the IP address 172.1.2.3.

  1. Add an additional public IP address to the Azure public load balancer (for this example let’s say the public IP address is: 40.1.2.3)
  2. Create a load balance rule with:
    • Frontend IP address: 40.1.2.3
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Session Persistence: Client IP
  3. On each FortiGate create a VIP address with:
    • External IP Address: 40.1.2.3
    • Mapped IP Address: 172.1.2.3
    • Port Forwarding: Enabled
    • External Port: 443
    • Mapped Port: 443
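Again, if scripting is your thing, a rough Azure PowerShell (AzureRM) sketch of step 2 would be something like this; the load balancer, frontend, pool and probe names are placeholders, and a health probe is assumed because the Azure rule requires one:

    $rg  = "fortigate-rg"
    $plb = Get-AzureRmLoadBalancer -Name "fg-public-lb" -ResourceGroupName $rg

    # Frontend IP configuration bound to the additional public IP (40.1.2.3 in this example)
    $frontend = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $plb -Name "web-frontend"
    $pool     = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $plb -Name "fg-backend-pool"
    $probe    = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $plb -Name "fg-probe-22"

    $plb | Add-AzureRmLoadBalancerRuleConfig -Name "inbound-web-443" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe `
        -Protocol Tcp -FrontendPort 443 -BackendPort 443 -LoadDistribution SourceIP
    $plb | Set-AzureRmLoadBalancer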

[Image: FG WebServer VIP Cfg]
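The equivalent step 3 VIP from the FortiGate CLI would look roughly like this (the external interface name is a placeholder for this example):

    config firewall vip
        edit "vip-webserver-443"
            set extip 40.1.2.3
            set extintf "port1"
            set portforward enable
            set mappedip "172.1.2.3"
            set extport 443
            set mappedport 443
        next
    end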

You can now create a policy on each FortiGate to allow HTTPS to the VIP you just created; HTTPS traffic will then be allowed through to your web server.
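As a rough example on the CLI (the interface names are placeholders, and whether you enable NAT here depends on your design):

    config firewall policy
        edit 0
            set name "inbound-https-web"
            set srcintf "port1"
            set dstintf "port2"
            set srcaddr "all"
            set dstaddr "vip-webserver-443"
            set action accept
            set schedule "always"
            set service "HTTPS"
            set nat enable
        next
    end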

For details on how to create policies and VIPs on FortiGates, refer to the Fortinet support website: http://cookbook.fortinet.com.

Once complete, the traffic flow to the web server will be as follows:

[Image: FG Web Traffic Flow]

Category: Azure Infrastructure, Azure Platform

5 Comments

  1. Hi, we had a similar deployment but ran into a problem where the public load balancer SNATs the outbound traffic from the FortiGates randomly from a pool of public IPs, which was undesirable for email and whitelisted SaaS outbound application traffic. We just found out that we need to change a public load balancing option somewhere: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/load-balancer/load-balancer-outbound-connections.md

    As in:
    "loadBalancingRules": [
      {
        "disableOutboundSnat": false/true
      }
    ]

    Did somebody run into this and do you know where and how to do this?
    Thanks
    JP

    • Hi Jan,
      We didn’t run into this issue, but my understanding is that to be able to utilise the disableOutboundSnat feature you need to be using the Azure Standard Public Load Balancer, which is currently in preview (https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview).
      You will then be able to create a load balance rule with the disableOutboundSnat feature. The PowerShell command to create the rule can be found here: https://docs.microsoft.com/en-us/powershell/module/azurerm.network/new-azurermloadbalancerruleconfig?view=azurermps-4.4.1
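      For illustration only (the names are placeholders and this requires the Standard SKU), the rule would be created along the lines of:

      New-AzureRmLoadBalancerRuleConfig -Name "outbound-443" `
          -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe `
          -Protocol Tcp -FrontendPort 443 -BackendPort 443 `
          -LoadDistribution SourceIP -EnableFloatingIP -DisableOutboundSNAT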

      Hope that helps.

      • Thanks,
        Useful.
        But did you actually make it work? We are still running into an issue which I think needs the UDR rewriting capabilities in the upcoming FOS 6.5.4:
        “The issue is that the public IP addresses are NATted/forwarded to an IP address in the external private range on the FortiGate, which uses a virtual IP to translate it to the correct internal host or internal load balancer frontend IP, after which the internal load balancer moves the traffic to the required end system.

        When outbound traffic is started from the internal machines, it should point to the internal load balancer associated with the FortiGate cluster, with a preference for the active firewall and not randomly selected!
        The outbound traffic passes through the FortiGate, runs to the external load balancer’s internal IP address (the default gateway of the FortiGates) and should be source NATted to the same address used for the inbound traffic; this only works if the FortiGate source NATs to the private IP address assigned to the public address!
        When using two FortiGates for HA they can NOT use the same incoming IP addresses on the public facing (WAN) interface, as that causes duplicate addresses (this being an active/active setup), so the issue is that when the primary FortiGate fails, incoming traffic must be rerouted to different IP addresses which are configured on the second FortiGate as VIP addresses.

        (Example: Azure public address 50.10.1.1 -> 10.35.1.11 in the FW1 wan1 range (10.35.1.x/24); FortiGate1 VIP 10.35.1.11 -> 10.31.2.70 (internal host); on FW2 we use 10.35.1.110 as the VIP pointing to 10.31.2.70 to avoid duplicate IP address issues.)
        So Azure must detect that FW1 has failed and then reroute 50.10.1.1 to 10.35.1.110 in order to reroute through FW2!
        Outbound traffic hidden behind 10.35.1.110 must then also be hidden behind 50.10.1.1 (if FW1 is active that will be 10.35.1.11).”

        Any thoughts?

      • Just an FYI, the Azure standard load balancer is now GA and solves the NAT issue. I have switched over to the standard load balancer and am using an LB rule with HA ports and floating IP enabled, and it all works correctly. A much more elegant solution than creating NAT pools.

  2. Aah, I understand your issue and feel your pain as I went through the same thing. My suggestion, if you can, is to wait for the next OS upgrade as you mentioned.

    I did get it working in the end using a combination of NAT IP pools and UDRs to control the traffic. It is kind of hard to explain, but I’ll give it a shot (a rough CLI sketch follows the list below).

    – Allocate two NAT IP pools, one for each firewall (e.g. 10.35.5.0/25 -> FW1 and 10.35.5.128/25 -> FW2)

    – On each FortiGate add an IP pool with a one-to-one mapping to the correct NAT IP pool (e.g. FW1: 10.35.5.5 and FW2: 10.35.5.133)

    – On each FortiGate add a VIP to map the NAT IP Pool address to the destination server (e.g. FW1 VIP: 10.35.5.5 -> 10.31.2.70 and FW2 VIP: 10.35.5.133 -> 10.31.2.70)

    – Create UDRs for accessing the NAT IPs so that requests for the FW1 NAT IP pool direct to FW1 and requests for the FW2 NAT IP pool direct to FW2, bypassing the internal load balancer (e.g. UDR-FW1: 10.35.5.0/25 -> FW1 backend IP address and UDR-FW2: 10.35.5.128/25 -> FW2 backend IP address)
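    To illustrate on FW1 only (the object names and interface are placeholders, and FW2 mirrors this with its own pool address), the FortiGate side of it looks roughly like:

        # One-to-one IP pool used to source NAT outbound traffic from FW1
        config firewall ippool
            edit "fw1-natpool"
                set type one-to-one
                set startip 10.35.5.5
                set endip 10.35.5.5
            next
        end
        # VIP mapping the FW1 NAT pool address to the destination server
        config firewall vip
            edit "fw1-vip-web"
                set extip 10.35.5.5
                set extintf "port1"
                set mappedip "10.31.2.70"
            next
        end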

