Introduction
An Azure App Service Environment (ASE) is a premium Azure App Service hosting environment that is dedicated, fully isolated, and highly scalable. It brings advanced features for hosting Azure App Services that can be required in different enterprise scenarios. Being a premium service, however, it comes with a premium price tag. Due to its cost, a proper business case and justification should be prepared before architecting a solution based on this interesting PaaS offering on Azure.
When planning to deploy Azure App Services, an organisation has the option of creating an App Service Plan and hosting them there. This is good enough for most cases. However, when there are higher demands for scalability and security, a dedicated and fully isolated App Service Environment might be necessary.
Below, I summarise the information required to decide whether an App Service Environment is needed for hosting your App Services. Please bear in mind that the facts and figures in this post are based on the Microsoft documentation at the time of writing and will eventually change.
App Service Environment Pricing
To calculate the cost of an App Service Environment, we have to consider its architecture. An Azure App Service Environment is composed of two layers of dedicated compute resources and a reserved static IP. Additionally, it requires a Virtual Network. The Virtual Network is free of charge and reserved IP Addresses carry a nominal charge. So the cost is mostly related to the compute resources. The ASE is composed of one front-end compute resource pool, as well as one to three worker compute resource pools.
The minimum implementation of an App Service Environment requires 2 x Premium P2 instances for the Front-End Pool and 2 x Premium P1 instances for Worker Pool 1, with a total cost per annum exceeding AUD $20,000. This cost can easily escalate by scaling the ASE up or out.
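To put the entry cost in perspective, here is a minimal Python sketch of the annual cost calculation for that minimum footprint. The hourly rates used below are illustrative assumptions only, not official pricing; always check the current Azure pricing page for your region and currency.

```python
# Rough annual cost estimate for the minimum ASE footprint (ASE v1).
# NOTE: the hourly rates below are illustrative assumptions, not official pricing.
HOURS_PER_YEAR = 24 * 365

# Assumed hourly rates in AUD (check the Azure pricing calculator for real figures).
assumed_rates_aud = {
    "P2": 0.80,  # Front-End Pool instances
    "P1": 0.40,  # Worker Pool instances
}

minimum_ase = [
    ("P2", 2),  # Front-End Pool: 2 x Premium P2
    ("P1", 2),  # Worker Pool 1: 2 x Premium P1
]

annual_cost = sum(
    assumed_rates_aud[tier] * count * HOURS_PER_YEAR
    for tier, count in minimum_ase
)

print(f"Estimated annual cost: AUD ${annual_cost:,.0f}")
# With these assumed rates, the estimate lands above AUD $20,000,
# which is consistent with the figure quoted above.
```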
Having said that, the value and benefits must be clear enough so that the business can justify this investment.
The benefits of an Azure App Service Environment
To understand the benefits and advanced features of an App Service Environment, we can compare what we get by deploying our Azure App Services with and without an App Service Environment, as shown in the table below.
| | Without an App Service Environment | On an App Service Environment |
| --- | --- | --- |
| Isolation level | Compute resources are hosted in a multi-tenant environment. | All compute resources are fully isolated and dedicated exclusively to a single subscription. |
| Compute resource specialisation | There is no out-of-the-box compute resource layer specialisation. | Compute resources on an ASE are grouped into two layers: a Front-End Pool and up to three Worker Pools. The Front-End Pool is in charge of SSL termination and load balancing of app requests for the corresponding Worker Pools. Once SSL has been off-loaded and the load balanced, the Worker Pool processes all the logic of the App Services. The Front-End Pool is shared by all Worker Pools. |
| Virtual Network (VNET) integration | A Virtual Network can be created and App Services can be integrated with it. The Virtual Network provides full control over IP address blocks, DNS settings, security policies, and route tables within the network. Classic ("v1") and Resource Manager ("v2") Virtual Networks can be used. | An ASE is always deployed in a regional Virtual Network, which provides access to resources in that VNET without any additional configuration. [UPDATE] Since mid-July 2016, ASEs support "v2" ARM-based Virtual Networks. |
| [UPDATE July 2016] Accessible only via Site-to-Site or ExpressRoute VPN | App Services are accessible via the public Internet. | ASEs support an Internal Load Balancer (ILB), which allows you to host your intranet or LOB applications on Azure and access them only via a Site-to-Site or ExpressRoute VPN. |
| Control over inbound and outbound traffic | Inbound and outbound traffic control is not currently supported. | An ASE is always deployed in a regional Virtual Network, so inbound and outbound network traffic can be controlled using a Network Security Group. [UPDATE] With the mid-July 2016 updates, ASEs can also be deployed into VNETs that use private address ranges. |
| Connection to on-prem resources | Azure App Service Virtual Network integration provides the capability to access on-prem resources via a VPN over the public Internet. | In addition to Virtual Network integration, an ASE provides the ability to connect to on-prem resources via ExpressRoute, which offers faster, more reliable, and more secure connectivity without going over the public Internet. Note: ExpressRoute has its own pricing model. |
| Inspecting inbound web traffic and blocking potential attacks | [UPDATE Sept 2016] A Web Application Firewall (WAF) service is available to App Services through Application Gateway. Application Gateway WAF has its own pricing model. | ASEs provide the ability to configure a Web Application Firewall to inspect inbound web traffic and block SQL injection, cross-site scripting, malware uploads, application DDoS, and other attacks. Note: the Web Application Firewall has its own pricing model. |
| Static IP address | By default, Azure App Services are assigned virtual IP addresses, but these are shared with other App Services in the same region. There is a way to give an Azure Web App a dedicated static inbound IP address; however, there is no way to get a dedicated static outbound IP, so an App Service's outbound IP cannot be securely whitelisted on on-prem or third-party firewalls. | ASEs provide a static inbound and outbound IP address for all resources contained within them. App Services (Web Apps, Azure Web Jobs, API Apps, Mobile Apps and Logic Apps) can connect to third-party applications using a dedicated static outbound IP, which can be whitelisted on on-prem or third-party firewalls (see the sketch after this table). |
| SLA | App Services provide an SLA of 99.95%. | App Services deployed on an ASE provide an SLA of 99.95%. |
| Scalability / Scale-Up | App Services can be deployed on almost the full range of pricing tiers, from Free to Premium. However, Premium P4 is not available for App Services without an ASE. | App Services deployed on an ASE can only be deployed on Premium instances, including Premium P4 (8 cores, 14 GB RAM, 500 GB storage). |
| Scalability / Scale-Out | App Services provisioned on a Standard App Service Plan can scale out to up to 10 instances. App Services provisioned on a Premium App Service Plan can scale out to up to 20 instances. | App Services deployed on an ASE can scale out to up to 50 instances. An ASE can be configured with up to 55 total compute resources, of which only 50 can host workloads. |
| Scalability / Auto Scale-Out | App Services can be scaled out automatically. | App Services deployed on an ASE can be scaled out automatically; however, an auto Scale-Out buffer is required. See the points to consider below. |
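The static IP row above is worth a closer look. The Python sketch below illustrates why a dedicated static outbound IP matters for firewall whitelisting: with an ASE you whitelist a single address, whereas without one you would have to whitelist the whole shared outbound pool, which other tenants also use. All IP addresses here are made up for the example.

```python
# Illustration of firewall whitelisting with and without a dedicated outbound IP.
# All IP addresses below are made up for the example.
from ipaddress import ip_address


def is_allowed(source_ip, whitelist):
    """Return True if the source IP is in the firewall whitelist."""
    return ip_address(source_ip) in {ip_address(ip) for ip in whitelist}


# With an ASE: a single dedicated static outbound IP can be whitelisted.
ase_whitelist = {"203.0.113.10"}

# Without an ASE: the app egresses from a shared pool, so the whole pool
# would have to be whitelisted -- and other tenants share those addresses.
shared_pool_whitelist = {"198.51.100.1", "198.51.100.2", "198.51.100.3", "198.51.100.4"}

print(is_allowed("203.0.113.10", ase_whitelist))          # True: your ASE's outbound IP
print(is_allowed("198.51.100.2", shared_pool_whitelist))  # True: but any other tenant
                                                          # on that address gets through too
```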
Points to consider
As seen above, Azure App Service Environments provide advanced features which might be necessary in enterprise applications. However, there are some additional considerations to bear in mind when architecting solutions to be deployed on these environments.
| | Without an App Service Environment | On an App Service Environment |
| --- | --- | --- |
| Use of the Front-End Pool | Azure App Service provides load balancing out-of-the-box, so there is no need for a Front-End Pool. | The Front-End Pool contains the compute resources responsible for SSL termination and load balancing of app requests within an App Service Environment. These compute resources cannot host workloads, so depending on your workload, the Front-End Pool of at least 2 x Premium P2 instances could be seen as an overhead. |
| Fault-tolerance overhead | The SLA is provided without requiring additional compute resources. | To provide fault tolerance, one or more additional compute resources have to be allocated per Worker Pool. These compute resources cannot be assigned a workload. |
| Auto Scale-Out buffer | Auto Scale-Out does not require a buffer. | Because Scale-Out operations in an App Service Environment take some time to apply, a buffer of compute resources is required to respond to the demands of the App Service. The size of the buffer is calculated using the Inflation Rate formula explained in detail here. The buffer's compute resources sit idle until a Scale-Out operation happens, which in many cases can be considered an overhead. For example, if auto Scale-Out is configured for an App Service (1 to 2 instances) and only 1 instance is in use, there is an overhead of 2 compute resources: 1 for fault tolerance (explained above) and 1 for the Scale-Out buffer (see the sketch after this table). |
| Deployment | App Services can be deployed using Azure Resource Manager templates. | App Service Environments can also be deployed using Azure Resource Manager templates. [UPDATE July 2016] ASEs now support ARM ("v2") VNETs. In addition, deploying an App Service Environment usually takes more than 3 hours. |
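To make the overhead in the last two rows concrete, here is a small Python sketch that reproduces the worked example from the Auto Scale-Out buffer row (1 running instance, 1 fault-tolerance instance, 1 buffer instance). This is a simplified illustration only, not the official Inflation Rate formula from the Microsoft documentation.

```python
# Simplified overhead illustration for an ASE Worker Pool.
# This is NOT the official Inflation Rate formula; it only reproduces the
# worked example in the table above.

def worker_pool_size(running_instances,
                     fault_tolerance_instances=1,
                     scale_out_buffer=1):
    """Compute resources that must be allocated in the Worker Pool."""
    return running_instances + fault_tolerance_instances + scale_out_buffer


running = 1  # App Service configured to auto scale out from 1 to 2 instances
total = worker_pool_size(running)
overhead = total - running

print(f"Allocated compute resources: {total}")  # 3
print(f"Idle overhead: {overhead}")             # 2 (1 fault tolerance + 1 buffer)
```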
Conclusion
So, coming back to the original question: when should you use an App Service Environment? When is it right to deploy App Services on an App Service Environment and pay the premium price? In summary:
- When higher scalability is required, e.g. more than 20 instances per App Service Plan or larger instances such as Premium P4, OR
- When inbound and outbound traffic control is required to secure the App Service, OR
- When connecting the App Service to on-prem resources via a secure channel (ExpressRoute), without going over the public Internet, is necessary, OR
- [Update July 2016] When access to the App Services has to be restricted to a Site-to-Site or ExpressRoute VPN only, OR
- [Update Sept 2016] When inspecting inbound web traffic and blocking potential attacks is needed without using Web Roles, OR
- When a static outbound IP address for the App Service is required,
AND
- Very important: when there is enough business justification to pay for it (including potential overheads such as the Front-End Pool, the fault-tolerance overhead, and the auto Scale-Out buffer).
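Purely as an illustration, the decision logic above can be condensed into a small Python helper. The parameter names are my own shorthand for the bullet points, not an official checklist.

```python
# Hypothetical decision helper condensing the criteria above.

def should_use_ase(needs_high_scale,
                   needs_traffic_control,
                   needs_expressroute_to_onprem,
                   needs_vpn_only_access,
                   needs_inbound_inspection,
                   needs_static_outbound_ip,
                   has_business_justification):
    """At least one technical driver AND a business case that covers the cost."""
    technical_driver = any([
        needs_high_scale,              # > 20 instances or Premium P4
        needs_traffic_control,         # NSG-based inbound/outbound control
        needs_expressroute_to_onprem,  # on-prem connectivity without public Internet
        needs_vpn_only_access,         # ILB: Site-to-Site / ExpressRoute VPN only
        needs_inbound_inspection,      # WAF without Web Roles
        needs_static_outbound_ip,      # whitelisting on external firewalls
    ])
    return technical_driver and has_business_justification


# Example: an intranet LOB app that must only be reachable over a VPN.
print(should_use_ase(False, False, False, True, False, False, True))  # True
```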
What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback!
Thanks for reading! 🙂
Great article, nice job explaining a complicated topic!
Thanks Pat! Good luck on your project!
Are you sure there's no way to obtain a static IP address for outbound traffic?
Hi Miguel, do you mean getting a static outbound IP address for an App Service without an ASE?
When you deploy an App Service (without an ASE), this is hosted on a multi-tenant platform. At the moment, this public multi-tenant hosting platform relies on a shared pool of outbound IP Addresses, thus it is technically impossible to get a static outbound IP address for your App Service.
Here you can find more info on how to find out the shared pool of outbound IP addresses your App Service might use:
https://social.msdn.microsoft.com/Forums/azure/en-US/fd53afb7-14b8-41ca-bfcb-305bdeea413e/maintenance-notice-upcoming-changes-to-increase-capacity-for-outbound-network-calls?forum=windowsazurewebsitespreview
As you can imagine, you could whitelist all possible IP Addresses you can get for your App Service, but that would leave a hole allowing other App Services hosted on the same multi-tenant hosting environment to connect to your protected resource.
There is quite a popular request to Microsoft to provide static outbound IP addresses for Web Apps (or App Services), but the current answer to this is to use an ASE:
https://feedback.azure.com/forums/169385-web-apps-formerly-websites/suggestions/6428310-static-ip-addresses-inbound-and-outbound-for-a
HTH
Did you get any solution to this problem?
I am also interested in knowing whether there is any option for an Azure App Service outbound IP to be securely whitelisted on on-prem or third-party firewalls, or is it only viable through a premium App Service Environment?
Hi St,
The status of outbound IP addresses of your Azure App Service is still the same.
Outbound IP Addresses for App Services are shared by many different tenants. In case you want to know the pool of possible outbound IP addresses you can get, here the details:
http://ruslany.net/2015/06/how-to-find-out-outbound-ip-addresses-used-by-azure-web-app/
If you need your own dedicated outbound IP for your app service, at the moment the only option available is to use an ASE.
HTH
Thanks to your clear explanation, I could finally understand ASE properly.
I am still confused about how the Front-End Pool works. I assume that there is a load balancer in front of the Front-End Pool, because there is more than one instance in the Front-End Pool. Do you know if my understanding is correct?
And do you know what protocol is used between the Front-End Pool and the Worker Pool? HTTP/HTTPS or TCP? I did not find any docs talking about this.
Thanks,
Tony
Hi Tony, I’m glad you’ve found the post useful.
These are actually very good questions.
Because you can connect to your Worker Pool Instances via different protocols (e.g. HTTP/S, FTP/S and Visual Studio Debugging), I would assume the technology in front of your ASE is at DNS level, thus something similar to Azure Traffic Manager, which you get as part of the bundle. If I am right, then once traffic has been directed, clients then connect to those endpoints directly.
You can find networking and Inbound Traffic documentation here:
https://azure.microsoft.com/en-gb/documentation/articles/app-service-app-service-environment-network-architecture-overview/
https://azure.microsoft.com/en-gb/documentation/articles/app-service-app-service-environment-control-inbound-traffic/
The good news is that most of this is already abstracted for you and you don’t need to worry about most of it 🙂
HTH,
Thanks. Very good article, it summarises the topic very well.
I have a couple of questions:
1) Can an ASE support DR when deployed inside a VNET? I think the public App Service provides DR automatically.
2) If we are using a dedicated DB for an app within the public App Service, how does DR work for the database tier?
3) Do they support availability zones/sets?
Hi Mahesh,
Thanks for your comments.
1) Your DR strategy should cover all the components of your solution, e.g. your web apps, databases, queues, etc. App Service Environment is only a private hosting environment for your app services (web apps). ASE provides redundancy at app service level, but this does not include your databases. The same is true for app services without an ASE. If desired, you could deploy your ASE across multiple regions to be covered in the case of a disaster at data center level, as mentioned here: https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-geo-distributed-scale/
2) As mentioned above, Database DR is to be handled separately. If it’s a SQL on a VM, you can find information here, https://azure.microsoft.com/en-in/documentation/articles/virtual-machines-windows-sql-high-availability-dr/. If it’s SQL Azure (PaaS), you can read this https://azure.microsoft.com/en-us/documentation/articles/sql-database-disaster-recovery/
3) Bear in mind that App Services are PaaS and managed for you. You don't need to worry about configuring availability sets for redundancy. App Services (with and without an ASE) provide redundancy out of the box.
HTH 🙂
Do you need an ASE to use NSGs or does an App Service have that capability? Haven’t been able to find that via the portal.
Hi Neil,
App services not hosted on an ASE can be integrated into a VNET, which gives you access to other resources secured on that VNET and even connectivity to on-prem resources, but this does not grant private access to your web app from the virtual network. Private access is only available with an ASE configured with an Internal Load Balancer (ILB).
HTH
Very helpful article. Clear and concise !
Thanks.
Thanks Bhavin! Happy clouding 😉
Hi, recently we realised that the Web Jobs option is greyed out in our App Service Environment. We are using an ILB with the internal option to manage our own subdomain. Do you have any solution for this? I'd appreciate your response.
Hi Rama, at the moment I don't have an ASE with ILB that I can play around with, so I am not sure whether this is expected behaviour. Sorry about that. I would suggest posting this question on Stack Overflow with the "azure-webjobs" tag. Usually MS people are actively monitoring and answering there. Cheers,
Hi, can I update my Web App on the ASE the same way as a Web App without an ASE, by swapping between staging and production slots?
Hi Peter, ASE supports deployment slots as well 🙂
Thanks for your response!
Could you give me some information about the swapping steps and so on? Either formal or informal is fine.
I believe you're better off relying on the official MS documentation 🙂
https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-staged-publishing
Happy Continuous Delivering!
Thank you very much!
The article is for App Service, but I will ask official MS support and try swapping on the ASE.
Thanks again for your information!
This documentation should be relevant for deployments in an ASE as well 🙂
Hi, regarding the statement in the column "Without an App Service Environment", where Application Gateway and WAF are supported: is this with this setup: https://blogs.msdn.microsoft.com/benjaminperkins/2014/05/05/how-to-get-a-static-ip-address-for-your-microsoft-azure-web-site/
Or does it work with all 4 inbound IP addresses?
Hi ADBK, sorry, I couldn't understand your question. Is it related to inspecting inbound traffic or to getting a static IP?
The article you are referring to describes how to get one static inbound IP. Without an ASE you get a pool of 4 outbound IP addresses which are static, but shared across multiple tenants.
Hi Rama, I hope you still get my response. When you have an ASE with ILB, all management APIs for your App Service are pretty much locked down. When you access the Azure Portal, the portal calls intermediate APIs hosted in different Azure datacentres, and these won't have access to your App Service management APIs (e.g. Kudu) because of the ILB. This is why some options are not available from the portal for ASEs with ILB. This is definitely a limitation that I would argue has to be fixed.
I guess this must be obvious because it is not even mentioned, but I assume an ASE can access an Azure SQL Database the same way a conventional App Service does, and the ASE does not change the service level definitions or pricing of that database tier?
We are interested in Azure scalability and finding bottlenecks in the standard App Services, so I thought I'd look at the ASE, but we also have certain bottlenecks at the database level and sort of hoped the ASE might address them too, but I guess not.
An app within an ASE can access a SQL Azure database the same way as any other App Service in the multi-tenant environment does. SQL Azure is not related to the ASE, thus using an ASE does not change the SLA or pricing of the SQL component. Additionally, SQL Azure cannot be deployed within your VNET.
HTH
Thanks. This is a bit of a long shot, but I'll ask anyway. We currently seem to be having an issue that sure looks like Azure throttling our outbound data, even though we're operating a premium plan (though we are using standard app services). Microsoft says outbound bandwidth is not limited, but we're seeing what we're seeing.
Is there any limit to outbound (data sent) bandwidth on an ASE? Do you have any numbers on the amount of outbound data you have seen on a heavily loaded ASE? Thanks.
Sorry, your scenario is still not very clear to me. I assume you have a Premium SQL Database and a Standard App Service Plan? How are you monitoring and detecting potential outbound traffic throttling from the App Service Plan?
I have never experienced outbound data throttling from an ASE.
I guess a plan is not itself premium or non-premium, but we are using a premium (P4) database and standard (S3) app servers – there seems little reason to pay triple for premium app servers… though now that I mention it, I suppose I should try benchmarking them too! First thing tomorrow. Should have thought of it earlier!
I'm using Blazemeter to simulate 1,000 users. We get the same throughput whether using 2, 4, or 8 app servers, and in each case it's the requests that return larger data volumes that are most delayed. App Insights says the requests finish quickly, but the consumer sees them complete slowly. The numbers are modest, about 6 megabits/second, about 40 megabytes/minute; we figure we should see much higher numbers than that before hitting any system limits. The metrics blades on the Azure portal are hard to reconcile – each reports the data a little differently and they never quite say how – but after some fiddling they seem in pretty good agreement.
I’m waiting on an Azure service request to see what Microsoft says, so far they’ve missed their response SLA.
I don't think you would get any improvement in performance moving from S3 to P3, while there is a considerable price difference. Have you run any profiler on your code to see where the time is being consumed? Here is the link to the App Insights Profiler https://github.com/Microsoft/ApplicationInsights-Home/blob/master/app-insights-profiler-preview.md
Bottlenecks could be in different components, like the SQL Db, queries, etc.
We are a bit off the post topic, but HTH 🙂
FWIW I tried the P3 and got about 10% better performance but nothing more; something is still preventing us from scaling. Given the numbers, I'm a bit baffled as to who uses Px app servers or why – did I read above that it's a requirement of ASEs that you use Premium?
P3 specs are pretty much the same as S3; you only get more local storage, which you might hardly use. What you really get from Premium is higher scalability. With S3 you can scale out to 10 instances, while with Premium you can scale out to 20 without an ASE and 50 with an ASE. All the other benefits of the ASE are described in the post above 🙂
fwiw, I think I found our problem, and it wasn’t Azure after all. Still appreciate your post and discussion to better understand all these capacities.
UPDATE:
This post is based on the App Service Environment v1. Microsoft has recently announced App Service Environment v2 which changes many of the points discussed here.
More details of v2 here:
https://msdn.microsoft.com/en-us/magazine/mt797651
Is an App Service Environment the only way to get a Web App or Mobile App to use a VNET associated with an ExpressRoute circuit?
If you want the Web App or Mobile App to have connectivity to on-premises resources via ExpressRoute, an ASE is the only option available at the moment.
Great article, though it is mentioned that the multi-tenant Web App supports WAF via Application Gateway. I believe that's not the case, as Application Gateway supports ASE only, not Web Apps. https://docs.microsoft.com/en-sg/azure/application-gateway/application-gateway-introduction
Very useful article. I also have a similar question related to the Front-End Pool and the Worker Pool. If I take the example of publishing one application to the ASE, which components go where? I think the incoming traffic will be handled by the Traffic Manager / ILB and not by the Front-End Pool.
This needs more clarity, if you could help with it. Thanks.
Thanks,
You can't deploy anything to the Front-End Pool. It is the Worker Pool that hosts your App Service. The Front-End Pool is a layer-seven load balancer, distributing incoming traffic between the apps and their respective Workers. The Front-End Pool also hosts some management processes, which control part of the 'PaaS magic' and are fully abstracted from us.
If you are using an ILB, the virtual IP of the Front End is only accessible from your VNET.
HTH.