Options to consider for SharePoint Framework solutions deployment

There are various options for packaging and deploying a SharePoint Framework solution, and as part of the packaging and deployment process, developers have to identify the best approach for their team. Planning the right approach for your solution can become a nightmare if you haven't weighed the options properly.

Having worked on multiple implementations of SPFx solutions for some time now, I have formed a view of the various options and the approaches to them. Hence, in this blog we will walk through these options and look at the merits and limitations of each.

At a high level, below are the main components that are deployed as part of an SPFx package deployment:

  1. The minified JavaScript bundle for all code
  2. The web part manifest file
  3. The compiled web part assets
  4. The package (.sppkg) file with all packaging information

Deployment Options

Please check this blog for an overview of the steps for building and packaging an SPFx solution. The packaged solution (.sppkg) file can then be deployed to an App Catalog site. The assets of the package (items 1-3 above) can be deployed using any of the four options below. We will look at the merits and limitations of each.

1. Deploy to Azure CDN

The assets could be deployed to an Azure CDN. The deployment script is already part of the SPFx solution, and the deployment can be done from within the solution. More detailed steps for setting this up are here.

Note: Please remember to enable CORS on the storage account before deploying the package.

If CORS is not enabled before the CDN profile is used, you might have to delete and recreate the storage account.
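For reference, the gulp-based deployment is driven by the config/deploy-azure-storage.json file in the solution. Below is a minimal sketch; the account, container and key values are placeholders you would substitute with your own:

{
  "workingDir": "./temp/deploy/",
  "account": "mystorageaccount",
  "container": "spfx-assets",
  "accessKey": "<storage-access-key>"
}

With that in place, the typical sequence is:

gulp bundle --ship
gulp deploy-azure-storage
gulp package-solution --ship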

Merits:

  • Easy deployment using gulp
  • Faster access to assets and web part resources because of CDN hosting
  • Add geographical restrictions (if needed)

Limitations:

  • Dependency on Azure Subscription
  • Proper set-up steps are required for the Azure CDN. In some cases, if the CDN is not set up properly, the deployment has to be done again.
  • Since the assets are deployed to a CDN endpoint, this option may not be recommended if the assets need restricted access

2. Deploy to Office 365 Public CDN

For this option, you will need to enable and set up Office 365 CDN in your tenancy before deployment. For more details of setting this up, check the link here.
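If you prefer to script the setup, the SharePoint Online Management Shell can enable the public CDN and register an origin. A rough sketch (the tenant and origin URLs are placeholders):

# Connect to the tenant admin site, then enable the public CDN
Connect-SPOService -Url https://yourtenant-admin.sharepoint.com
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true
# Register the library that will serve the assets as a CDN origin
Add-SPOTenantCdnOrigin -CdnType Public -OriginUrl sites/cdn/cdnassets
# Verify the configured origins
Get-SPOTenantCdnOrigins -CdnType Public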

Merits:

  • Faster access to assets and web part resources because of CDN hosting
  • No Azure subscription requirement
  • Content is accessible from SharePoint Interface

Limitations:

  • Manual copy of asset files to the CDN-enabled SharePoint library
  • Office 365 CDN is a tenant setting and has to be enabled for the whole tenancy
  • Since the assets are deployed to a CDN endpoint, this option may not be recommended if the assets need restricted access
  • Accidental deletion could cause issues

3. Deploy to SharePoint document library

With this option, the compiled assets are copied to a SharePoint document library anywhere in the tenancy. Setting this up is quite simple: first set "includeClientSideAssets": false in the package-solution.json file, and then set the CDN details in the write-manifests.json file.
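As a sketch, the two settings mentioned above would look like this; the library URL is a placeholder for your own asset-hosting library. In config/package-solution.json (only the relevant property shown):

{
  "solution": {
    "includeClientSideAssets": false
  }
}

And in config/write-manifests.json:

{
  "cdnBasePath": "https://yourtenant.sharepoint.com/sites/assets/SPFxAssets"
}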

Merits:

  • No need for additional Azure hosting or for enabling the Office 365 CDN
  • Access to asset files from the SharePoint interface

Limitations:

  • Manual copy of asset files to the SharePoint library
  • Accidental deletion could cause issues

4. Deploy to ClientAssets in App Catalog

From SPFx version 1.4 onwards, it is possible to include the assets as part of the package file and deploy them to the hidden ClientAssets library residing in the App Catalog. This is set in the package-solution.json file with "includeClientSideAssets": true.
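For completeness, a sketch of this option: flip the flag in config/package-solution.json and ship the package, and the toolchain bundles the assets into the .sppkg for you.

{
  "solution": {
    "includeClientSideAssets": true
  }
}

gulp bundle --ship
gulp package-solution --ship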

Merits:

  • No extra steps needed to package and deploy assets

Limitations:

  • Increases the payload of the package file
  • Risk for tenant admins in deploying script files to the tenant App Catalog

Conclusion

In this blog, we saw the various options for SPFx deployment, and the merits and limitations of each approach.

Happy Coding !!!

 

Provisioning complex Modern Sites with Azure Functions and Flow – Part 3 – Post Provisioning Site Configuration

In the previous two blogs, part 1 and part 2, we looked at the steps to create a Modern team site and apply a custom provisioning template to it. In this blog, we will look at the steps of the post-provisioning process that implement site-specific requirements. Some of these could be:

1. Apply default values to list fields
2. Create a set of default folders
3. Manage security groups (SharePoint level) and permission levels
4. Navigation changes
5. Add/enable web parts or custom actions (SPFx extensions)

Most of the above steps have been part of SharePoint provisioning processes for a long time; they are just less complex now, with provisioning templates doing a lot of the heavy lifting. I will be writing very soon about PnP templates and their dos and don'ts.

Pre-requisites

One key point to add is that, with Modern Team Sites and Office 365 Groups, we cannot add AD security groups into an Office 365 Unified Group. For more information, please see this link.

The apply-template process (link) takes about 45-90 minutes for complex templates (it is a long-running process), so it wouldn't be possible to start the post-provisioning process from a Flow wait state. Instead, we could trigger the post-provisioning process on the update of an inventory item, or poll an endpoint for the status. In our case, we triggered the process by updating the inventory list with a status indicating that the apply-template process was complete.

Post Provisioning Process

1. The first step of the post-provisioning process is to make sure that custom scripting is allowed on the team site (link) and that all dependencies are ready, such as term stores, navigation items, site pages, content types, site columns etc. For a complex site template, this step will check for failure conditions to make sure that all artefacts are in place.

Note: The below sequence of steps could vary based on the solution and site structure in place, but this was the faster way to isolate issues and ensure dependencies were met.

After the failure checks are done, we start with the site structure changes and navigation changes. For implementing navigation changes, check the blogs here and here.

2. Next, we will update any site-specific site columns and site content types, and make permission-level changes. An upcoming blog will have more details on this.

3. After that, we will update the list structure and move on to list-specific updates, such as setting default values, modifying list properties etc.

4. Next, let’s move on to apps and site pages updates. These changes take time because of the SharePoint ALM lifecycle and possible duplications, so error handling is key. Please check the blog here, and for steps to deploy apps and web parts, here.

5. Before we finalise the site, let’s provision folders and metadata. This is not a simple process if you have to set metadata values for a large number of folders – in our case, 800 nested folders (child within parent). So we will use the metadata file override; all the values in the defaults file have to be hardcoded before the override.

Note: The metadata file override is generally not a good approach, because possible corruption can render the library unusable, so implement error handling for all cases. For the CSOM approach, check here.
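As an alternative to the metadata file override, here is a minimal PnP PowerShell sketch of the folder-and-defaults step. The site URL, library, field and values are illustrative, not from the original solution:

# Connect to the newly provisioned site
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/newsite" -Credentials (Get-Credential)
# Resolve-PnPFolder creates the full nested path if it does not already exist
Resolve-PnPFolder -SiteRelativePath "Shared Documents/Parent/Child"
# Set a library-level default value that applies to newly added items
Set-PnPDefaultColumnValues -List "Documents" -Field "Department" -Value "Finance"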

6. Finally, we will set property bag values on the site to apply some of the tags that make the site searchable. Here is the blog for the same.
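A sketch of that property bag step with PnP PowerShell; the key and value are hypothetical, and the site must allow custom scripts for property bag writes to succeed:

# -Indexed makes the property crawlable so the tag becomes searchable
Set-PnPPropertyBagValue -Key "SiteClassification" -Value "Project" -Indexed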

Conclusion

Above, we saw the final part of the site provisioning process: setting up site properties and attributes to prepare the site before handing it off to the business for use.

Hub-Spoke communication using vNet Peering and User Defined Routes

Introduction

Recently, I was working on a solution for a customer where they wanted to implement a Hub-Spoke virtual network topology that enabled the HUB to communicate with its Spoke networks via vNet Peering. They also required the SPOKE networks to be able to communicate with each other but peering between them was NOT allowed.
As we know, vNet peering is non-transitive – which means that even though SPOKE 1 is peered with the HUB network and the HUB is peered with SPOKE 2, this does not enable automatic communication between SPOKE 1 and SPOKE 2 unless they are explicitly peered, which in our case we were not allowed to do.
So, let’s explore a couple of options on how we can enable communication between the Spoke networks without peering.

Solutions

There are several ways to implement Spoke-to-Spoke communication, but in this blog I’d like to provide details of the two feasible options that worked for us.
Option 1 is to place a Network Virtual Appliance (NVA) – basically a Virtual Machine with a configured firewall/router – within the HUB, and configure it to forward traffic to and from the SPOKE networks.
If you search the Azure Marketplace with the keywords “Network Virtual Appliance“, you will be presented with several licensed products that you could install and configure in the HUB network to establish this communication. Configuration of these virtual appliances varies, and installation instructions can easily be found on their product websites.
Option 2 is to attach a Virtual Network Gateway to the HUB network and make use of User Defined Routes to enable communication between the SPOKES.
The above information was sourced from this very helpful blog post.
The rest of this blog is a detailed step-by-step guide, with the testing performed, for implementing the approach mentioned in Option 2.

Implementation

1.) Create 3 Virtual Networks with non-overlapping IP addresses

  • Log on to the Azure Portal and create the Hub Virtual Network


  • Create the 2 additional virtual networks as the SPOKES

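A scripted equivalent using the Az PowerShell module might look like the following. The resource group, location and address spaces are assumptions inferred from the IPs that appear later in this post (10.4.x for the hub, 10.5.x and 10.6.x for the spokes):

# Hub and spoke VNets with non-overlapping address spaces
New-AzVirtualNetwork -Name HubNetwork -ResourceGroupName rg-hubspoke `
    -Location australiaeast -AddressPrefix 10.4.0.0/16
New-AzVirtualNetwork -Name Spoke1Network -ResourceGroupName rg-hubspoke `
    -Location australiaeast -AddressPrefix 10.5.0.0/16
New-AzVirtualNetwork -Name Spoke2Network -ResourceGroupName rg-hubspoke `
    -Location australiaeast -AddressPrefix 10.6.0.0/16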

2.) Now that we have the 3 Virtual Networks provisioned, let’s start Peering them as follows:

a.) HubNetwork <> Spoke1Network

b.) HubNetwork <> Spoke2Network

  • Navigate to the Hub Virtual Network and create a new peering


Select the “Allow gateway transit” option.

  • Repeat the above step to create a peering with Spoke2Network as well.

3.) To establish a successful connection, we will have to create a peering to the HUB Virtual Network from each of the SPOKE Networks too

  • Navigate to Spoke1Network and create a new Peering


Notice that when we select the “Use remote gateways” option, we get an error, as we haven’t yet attached a Virtual Network Gateway to the HUB network. Once a gateway has been attached, we will come back and re-configure this.

For now, do not select this option; just click Create.

  • Repeat the above step for Spoke2 Virtual Network
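Scripted, the two directions of a hub-spoke peering look roughly like this (resource group and peering names are assumptions):

$hub    = Get-AzVirtualNetwork -Name HubNetwork -ResourceGroupName rg-hubspoke
$spoke1 = Get-AzVirtualNetwork -Name Spoke1Network -ResourceGroupName rg-hubspoke
# Hub side: allow the spokes to use the hub's gateway
Add-AzVirtualNetworkPeering -Name HubToSpoke1 -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke1.Id -AllowGatewayTransit
# Spoke side: -UseRemoteGateways is deliberately omitted here and is
# added later in step 5, once the gateway exists
Add-AzVirtualNetworkPeering -Name Spoke1ToHub -VirtualNetwork $spoke1 `
    -RemoteVirtualNetworkId $hub.Id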

4.) Let’s now provision a Virtual Network Gateway

  • Before provisioning a gateway, a Gateway Subnet is required within the Hub Virtual Network. To create this, click on the “Subnets” option in the blade of the Hub Virtual Network and then click on “Gateway subnet”


For the purpose of this demo, we will create a Gateway Subnet with the smallest possible network address space, a /29 CIDR, which provides us with 8 addresses, of which the first and last are reserved for protocol conformance and x.x.x.1 – x.x.x.3 for Azure services. For production environments, a Gateway Subnet with at least a /27 address space is advised.

Let’s assume for now that when we provision the Virtual Network Gateway, the internal IP address it gets assigned will be from the 4th address onwards, which in our case would be 10.4.1.4

  • Provision the Virtual Network Gateway

Create a new Virtual Network Gateway with the following settings:


Ensure that you select the Hub Virtual Network in the Virtual network field which is where we want the Gateway to be attached. Click Create.

  • The gateway provisioning process may take a while to complete, and you will need to wait for the Updating status to disappear. It can take anywhere between 30-45 minutes.
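The gateway subnet and gateway can also be provisioned in PowerShell; below is a sketch consistent with the /29 range described above, where the SKU and resource names are assumptions:

$hub = Get-AzVirtualNetwork -Name HubNetwork -ResourceGroupName rg-hubspoke
Add-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $hub -AddressPrefix 10.4.1.0/29
$hub = $hub | Set-AzVirtualNetwork
# Public IP for the gateway, then the gateway itself (a long-running operation)
$pip = New-AzPublicIpAddress -Name hub-vng-pip -ResourceGroupName rg-hubspoke `
    -Location australiaeast -AllocationMethod Dynamic
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $hub
$ipConf = New-AzVirtualNetworkGatewayIpConfig -Name gwipconf -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name hub-vng -ResourceGroupName rg-hubspoke -Location australiaeast `
    -IpConfigurations $ipConf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1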


5.) Once the gateway has been provisioned, let’s now go back to the Peering section of each of the SPOKE networks and configure the “Use remote gateways” option


  • Repeat the above step for Spoke2ToHub peering

6.) We will now create the Route Tables and define user routes needed for the SPOKE to SPOKE communication

  • Create 2 new Route tables in the portal with the following settings:


  • Define the User Routes as follows:


In the Address Prefix field, insert the CIDR subnet address of the Spoke2 Virtual Network, which in our case is 10.6.0.0/16

Select the Next hop type as Virtual appliance, and set the Next hop address to the internal address of the Virtual Network Gateway – in our case 10.4.1.4, as mentioned earlier.

  • Repeat this step to create a new Route in the Spoke2RouteTable as well by inserting the Subnet CIDR address of Spoke1 Virtual Network
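The equivalent in PowerShell, using the gateway's assumed internal IP of 10.4.1.4 from earlier (resource names are assumptions):

# Spoke1 -> Spoke2: send 10.6.0.0/16 via the hub gateway's internal IP
$rt1 = New-AzRouteTable -Name Spoke1RouteTable -ResourceGroupName rg-hubspoke -Location australiaeast
Add-AzRouteConfig -RouteTable $rt1 -Name ToSpoke2 -AddressPrefix 10.6.0.0/16 `
    -NextHopType VirtualAppliance -NextHopIpAddress 10.4.1.4
Set-AzRouteTable -RouteTable $rt1
# Mirror this for Spoke2RouteTable with -AddressPrefix 10.5.0.0/16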

7.) Let’s now associate these Route tables with our Virtual Networks

  • Navigate to the Spoke1Network and in the “Subnets” section of the blade, select the default subnet


In the Route table field, select Spoke1RouteTable and click Save


  • Repeat the above step to associate Spoke2RouteTable with the Spoke2 Virtual Network
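The association can be scripted too; the “default” subnet name and its /24 prefix below are assumptions based on the VM IPs used later in this post:

$spoke1 = Get-AzVirtualNetwork -Name Spoke1Network -ResourceGroupName rg-hubspoke
$rt1 = Get-AzRouteTable -Name Spoke1RouteTable -ResourceGroupName rg-hubspoke
# Attach the route table to the spoke's workload subnet
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $spoke1 -Name default `
    -AddressPrefix 10.5.0.0/24 -RouteTable $rt1
$spoke1 | Set-AzVirtualNetwork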

We have now completed the required steps to ensure that both SPOKE Virtual Networks are able to communicate with each other via the HUB.

Testing

  • In order to test our configuration, let’s provision a virtual machine in each of the Spoke networks and conduct a simple ping test

1.) Provision a basic Virtual Machine in each of the Spoke networks

2.) Run the following PowerShell command in each VM to allow ICMP ping through the Windows firewall, as this traffic is blocked by default:

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4

3.) In my testing, the VMs had the following internal IPs:

The VM running in the Spoke 1 network: 10.5.0.4

The VM running in the Spoke 2 network: 10.6.0.4


Pinging 10.6.0.4 from 10.5.0.4 returns a successful response!

Provisioning complex Modern Sites with Azure Functions and Microsoft Flow – Part 1 – Architecture

In one of my previous blogs, here, I discussed creating Office 365 groups using an Azure Function and Flow. The same process can also be used to provision Modern Team sites in SharePoint Online, because Modern Team Sites are Office 365 groups too. However, if you are creating a complex Modern Team Site with lots of libraries, content types, term store associated columns etc., it will be challenging to do it with a single Azure Function.
Thus, in this blog (part 1), we will look at the architecture of a solution to provision a complex Modern Team Site using multiple Azure Functions and Flows. This is an approach that went through four months of validation and testing. There might be other options, but this one worked for a complex team site which takes around 45-90 minutes to provision.
Solution Design
To start with, let’s look at the solution design. The solution consists of two major components:
1. Template Creation – create a SharePoint Modern Team site to be used as a template and generate a provisioning template from it
2. Provisioning Process – create a SharePoint inventory list to run the Flow and Azure Functions. There will be three Azure Functions that will run three separate parts of the provisioning lifecycle. More details about the Azure Functions will be in an upcoming blog.
Get the Provisioning Template
The first step in the process is to create a clean site that will be used as a reference template site for the provisioning template. In this site, create all the lists, libraries, site columns and content types, and set any other necessary site settings.
In order to make sure that the generated template doesn’t have any elements which are not needed for provisioning, use the following PnP PowerShell cmdlet. The below cmdlet excludes content type hub association, ALM API handlers and site security from the template, as they are not needed for provisioning.

Get-PnPProvisioningTemplate -Out "ProvisioningTemplate.xml" -ExcludeHandlers ApplicationLifecycleManagement, SiteSecurity -ExcludeContentTypesFromSyndication

The output of the above cmdlet is a ProvisioningTemplate.xml file, which can be applied to new sites to set up the same SharePoint elements. To know more about the provisioning template file, its schema and the allowed tags, check the link here.
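The template can later be applied to a target site with the companion cmdlet; a minimal sketch, where the site URL is a placeholder:

Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/newteamsite" -Credentials (Get-Credential)
Apply-PnPProvisioningTemplate -Path "ProvisioningTemplate.xml"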
Team Site Provisioning Process
The second step in the process is to create and apply the template to a Modern SharePoint Team site using Flow and Azure Functions. The detailed steps are as follows:
1. Create an inventory list to capture all the requirements for site creation
2. Create two Flows:
a) Create and Apply Template flow, and
b) Post Provisioning flow
3. Create three Azure Functions:
a) Create a blank Modern Team Site
b) Apply the provisioning template to the above site. This is a long-running process and can take about 45-90 minutes for a complex template with about 20 libraries, 20-30 site columns and 10-15 content types
Note: Azure Functions on the Consumption plan have a timeout of 10 minutes. Host the Azure Function on an App Service Plan for the above to work without issues
c) Post Provisioning, to apply changes that are not supported by the provisioning template, such as creating default folders etc.
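On an App Service Plan, the function timeout can also be raised explicitly in host.json; a sketch, where the two-hour value is an assumption sized for a 45-90 minute run:

{
  "functionTimeout": "02:00:00"
}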
Below is the process flow for the provisioning process. It has steps from 1 to 11, going from creating the site to applying the template. A brief list of the steps follows:

  1. Call the Create Site flow to start the provisioning process
  2. Call the Create Site Azure Function
  3. Create the Modern Team Site in the Azure Function, set any dependencies required for the apply-template step (such as navigation items, pages etc.), and then return to the flow
  4. Call the Apply Template Azure Function
  5. Get the previously generated ProvisioningTemplate.xml file from a shared location
  6. Apply the template to the newly created Modern site. Note: the flow call times out because it cannot wait for such a long-running process
  7. Update the status column in the site directory so that the post-provisioning flow can start
  8. Call the Post Provisioning flow to run the post-provisioning Azure Function
  9. The post-provisioning Azure Function completes the remaining SharePoint changes which were not handled by the apply-template step, such as setting field default values, creating folders in libraries, associating default values to taxonomy fields etc.

Conclusion:
In this blog, we saw how to create a provisioning process that handles complex modern team site creation, at a high architectural level. Next, we will deep dive into the Azure Functions that create the site, apply the template and run the post-provisioning process, in upcoming blogs.
Happy Coding!!!

Planning Site structure and Navigation in SharePoint Modern Experience Communication and Team sites

If you are planning to implement, or are implementing, Modern team sites or Communication sites, there has been a change in the best practices for planning and managing site structure, site hierarchy and navigation. This is a very common question during my presentations – how do we manage site structures, navigation and content in the Modern experience?
So, in this blog, we will look at a few strategies for planning site structure and navigation in Modern experience sites.
1. First and foremost, get rid of nested subsites and site hierarchy navigation. Recently, Microsoft has been pushing a flat site collection structure with Modern Team and Communication sites, which adds a lot of benefit for managing isolation and content. So, the new approach: flat site collections and no subsites. (There are various advantages of flat site collection structures, which will be listed in another upcoming blog.)
2. Secondly, to achieve a hierarchical relationship among sites for things such as navigation, news items, search etc., use Hub Sites. Hub sites are the new way of connecting SharePoint site collections together. Besides, they have the added advantage of aggregating information, such as news and search results, from the associated sites. So, create a Hub site for navigation across related sites.
3. The best candidate for Hub sites, in my opinion, is a Communication site. Communication sites have a top navigation that can be easily extended for Hub sites. They are perfect for publishing information and showing aggregated content. However, it also depends on whether the Hub site is meant for a team or business unit, or for the company as a whole. So, use a Communication site as a Hub site if targeting the whole company or a major group.
4. There is now one navigation structure – the quick launch (left hand) navigation doubles as the top navigation for Communication sites – so there is no need to maintain two navigations. If you ask me, this is a big win and removes a lot of confusion for end users.
5. The quick launch for Modern Team and Communication sites allows three levels of sub-hierarchy, which lets you define a nested custom hierarchy for navigation that can differ from the content structure and site structure.
6. Focus on content, not on navigation or the location of content, through new Modern web parts such as Highlighted content, Quick links etc., which allow you to find content anywhere easily.
7. Finally, here are a few limitations of Modern site structure and navigation (as of June 2018) for reference. Hopefully, these will be addressed soon.

    • Permissions still need to be managed at each site collection; there is no nested structure yet. It is, however, possible to use AD groups for a consistent permissions set-up
    • Office 365 Unified Security Groups cannot have AD or other Office 365 groups nested within them for Modern Team sites. But SharePoint site permissions can be managed through AD groups
    • Contextual navigation bolding is missing in Hub sites, i.e. if you click on a link to move to a child site, the navigation item is not automatically bolded; but this might be coming soon
    • Navigation headers in Modern sites cannot be blank and need to be set to a link

Conclusion:
In this blog, we looked at an approach to Modern site structures, hierarchy and navigation.

Defining IT Strategy

Information Technology (IT) Strategy is a comprehensive plan that outlines how technology should be used to meet IT and business goals.

The following approach can be used to define your organisation’s IT Strategy.

Inputs:

  1. Organisational Business Priorities
  2. Organisational Key Behaviours
  3. How Business will be Supported by IT
  4. Technology Influences
  5. IT Strategic Principles
  6. IT Service Management Operating Principles

First of all, in order to define an IT Strategy, we need to obtain the above inputs (as many as possible). The approach to defining the strategy is based on what the business priorities are and how IT is going to shape itself to support the business goals. Those IT priorities will then become the strategy, with key initiatives to support and achieve both IT and business goals.
Example Step 1: Organisational Business Priorities

  • Customers First
  • Efficient
  • People and Relationship
  • Secure Service
  • Outstanding Growth

Example Step 2: Organisational Key Behaviours

  • Accountable
  • Connected
  • Innovation
  • High Performance

Example Step 3: How the Business will be Supported by IT

  • Keep the Business Running
  • Execute Business Change
  • Leadership
  • Adaptability

Example Step 4: Technology Influences

  • Technology Trends
  • Current Industry
  • Agile
  • Innovative Solutions
  • Adaptable Changes

Example Step 5: IT Strategic Principles

  • Put Business at First
  • Efficiency
  • People and Relationships

Example Step 6: IT Service Management Operating Principles

  • Provide Effective Service
  • Driven by Agility
  • Create Efficiency
  • Enforce Resilience
  • Value People

Outcome 1:

Example: IT Strategy

Strengthen service and customer focus via:

  • Improving our customer satisfaction
  • Creating a service delivery culture

Promote agility and flexibility through the services we offer by:

  • Investing in our BYO consumerisation
  • Increasing our utilisation of virtualisation technology
  • Investigating emerging technologies to support flexible workforce

Innovate efficiency and strengthen our partnership by:

  • Reducing lifecycle cost through the Cloud program and other cost-saving initiatives
  • Optimising operations and strategically investing in improvements

Improve service quality, reliability, and maintainability by:

  • Focusing on the stability & robustness of our systems
  • Improving the quality of our processes by driving quality upstream

Invest in people to grow and support by:

  • Creating a collaborative, proactive, outside-in culture

 
Outcome 2:
The above strategy should have key initiatives that support the strategy (supporting both IT and business goals) and an implementation/transformation roadmap.
Summary
Hope this is useful. This is one of the approaches that can be used to define your IT Strategy and key initiatives.

EU GDPR – is it relevant to Australian companies?

The new General Data Protection Regulation (GDPR) from the European Union (EU) imposes new rules on organisations that offer goods and services to people in the EU, or that collect and analyse data tied to EU residents, no matter where the organisation or the data processing is located. GDPR comes into force in May 2018.
If your customers reside in the EU, whether you have a presence in the EU or not, then GDPR applies to you. The internet lets you interact with customers wherever they are, and GDPR applies to anyone that deals with EU people wherever they are.
And the term personal data covers everything from IP address, to cookie data, to submitted forms, to CCTV and even to a photo of a landscape that can be tied to an identity. Then there is sensitive personal data, such as ethnicity, sexual orientation and genetic data, which have enhanced protections.
And for the first time there are very strong penalties for non-compliance – the maximum fine for a GDPR breach is €20M, or 4% of worldwide annual turnover, whichever is greater. The maximum fine can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core Privacy by Design concepts.
Essentially GDPR states that organisations must:

  • provide clear notice of data collection
  • outline the purposes the data is being used for
  • collect only the data needed for those purposes
  • ensure that the data is kept only as long as required for processing
  • disclose whether the data will be shared within or outside of the EU
  • protect personal data using appropriate security
  • ensure individuals have the right to access, correct and erase their personal data, and to stop an organisation processing their data
  • and notify authorities of personal data breaches.

Specific criteria for companies required to comply are:

  • A presence in an EU country
  • No presence in the EU, but it processes personal data of European residents
  • More than 250 employees
  • Fewer than 250 employees, but the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies.

What does this mean in real terms to common large companies? Well…

  • Apple turned over about USD$230B in 2017, so the maximum fine applicable to Apple would be USD$9.2B
  • CBA turned over AUD$26B in 2017 and so their maximum fine would “only” be AUD$1B
  • Telstra turned over AUD$28.2B in 2017, the maximum fine would be AUD$1.1B.

Ouch.
The GDPR legislation won’t impact Australian businesses, will it? What if an EU resident gets a Telstra phone or a CBA credit/travel card whilst on holiday in Australia, or if your organisation has local regulatory data retention requirements that appear, on the surface at least, to be at odds with GDPR obligations…
I would get legal advice if the organisation provides services that may be used by EU nationals.
In a recent PwC survey, “Pulse Survey: US Companies ramping up General Data Protection Regulation (GDPR) budgets”, 92% of respondents stated that GDPR is one of several top priorities.
Technology cannot alone make an organisation GDPR compliant. There must be policy, process, people changes to support GDPR. But technology can greatly assist organisations that need to comply with GDPR.
Microsoft has invested in providing assistance to organisations impacted by GDPR.
Office 365 Advanced Data Governance enables you to intelligently manage your organisation’s data with classifications. The classifications can be applied automatically, for example, if there is GDPR German PII data present in the document the document can be marked as confidential when saved. With the document marked the data can be protected, whether that is to encrypt the file or assign permissions based on user IDs, or add watermarks indicating sensitivity.
An organisation can choose to encrypt its data at rest in Office 365, Dynamics 365 or Azure with its own encryption keys. Alternatively, a Microsoft-generated key can be used. Sounds like a no-brainer – all customers will use customer keys? However, the customer must have an HSM (Hardware Security Module) and a proven key management capability.
Azure Information Protection enables an organisation to track and control marked data. Distribution of data can be monitored, and access and access attempts logged. This information can allow an organisation to revoke access from an employee or partner if data is being shared without authorisation.
Azure Active Directory (AD) can provide risk-based conditional access controls – can the user credentials be found in public data breaches, is it an unmanaged device, are they trying to access a sensitive app, are they a privileged user or have they just completed an impossible trip (logged in five minutes ago from Australia, the current attempt is from somewhere that is a 12 hour flight away) – to assess the risk of the user and the risk of the session and based on that access can be provided, or request multi-factor authentication (MFA), or limit or deny access.
Microsoft Enterprise Mobility + Security (EMS) can protect your cloud and on-premises resources. Advanced behavioural analytics are the basis for identifying threats before data is compromised. Advanced Threat Analytics (ATA) detects abnormal behaviour and provides advanced threat detection for on-premises resources. Azure AD provides protection from identity-based attacks and cloud-based threat detection, and Cloud App Security detects anomalies for cloud apps. Cloud App Security can detect what cloud apps are being used, as well as control access, and can support compliance efforts with regulatory mandates such as the Payment Card Industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley (SOX), the General Data Protection Regulation (GDPR) and others. Cloud App Security can apply policies to apps from Microsoft or other vendors, such as Box, Dropbox, Salesforce, and more.
Microsoft provides a set of compliance and security tools to help organisations meet their regulatory obligations. To reiterate policy, process and people changes are required to support GDPR.
Please discuss your legal obligations with a legal professional to clarify any obligations that the EU GDPR may place on your organisation. Remember May 2018 is only a few months away.

Azure ARM architecture pattern: a DMZ design with a firewall appliance

I’m in the process of putting together a new Azure design for a client. As always in Azure, the network components form the core of the design. There were a couple of key requirements that needed to be addressed that the existing environment had outgrown: a lack of any layer 7 edge heightened security controls, and a lack of a DMZ.

I was going through some designs that I’ve previously done and was checking the Microsoft literature on what some fresh design patterns might look like, in case anything’s changed in recent times. There is still only a single reference in the Microsoft Azure documentation, and it still references ASM and not ARM.

For me then, it seems that the existing pattern I’ve used is still valid. Therefore, I thought I’d share what that architecture looks like via this quick blog post.

My DMZ with a firewall appliance design


Here’s an overview of some key points on the design:

  • Firewall appliances will have multiple interfaces but typically have 2 that we are mostly concerned about: an internal interface and an external interface
  • Network interfaces in ARM are now independent objects from compute resources
    • As such, an interface needs to be associated with a subnet
    • Both the internal and external interfaces could in theory be associated with the same subnet, but that’s a design for another blog post some day
  • My DMZ design features two DMZ subnets across two zones
    • Zone 1 = “Untrusted”
    • Zone 2 = “Semi-trusted”
    • Zone 3 = “Trusted” – this is for reference purposes only, so you get the overall picture
  • Simple subnet design
    • Subnet 1 = “External DMZ”
    • Subnet 2 = “Internal DMZ”
    • Subnet 3 = Trusted
  • Network Security Groups (NSGs) are also used to encapsulate the DMZ subnets and to isolate traffic from the VNET
  • Through this topology there are effectively three layers of firewall between the untrusted zone and the trusted zone
    • External DMZ NSG, firewall appliance and Internal DMZ NSG
  • With two DMZ subnets (external DMZ and internal DMZ), there are two scenarios for deploying DMZ workloads
    • External DMZ = workloads that do not require heightened security controls by way of the firewall
      • Workload example: proxy server, jump host, additional firewall appliance management or monitoring interfaces
      • Alternatively, this subnet does not need to be used for anything other than the firewall edge interface
    • Internal DMZ = workloads that require heightened security controls and also (by way of best practice) shouldn’t be deployed in the trusted zone
      • Workload example: front end web server, Windows WAP server, non domain joined workloads
  • Using the firewall as an edge device requires route tables to force all traffic leaving a subnet to be directed to the firewall’s internal interface (see the sketch after this list)
    • This applies to the internal DMZ subnet and the trusted subnet
    • The external DMZ subnet does not have a route table configured, so that the firewall is able to route out to Azure internet and the greater internet
  • Through VNET peering, other VNETs and subnets could also route out to Azure internet (and the greater WWW) via this firewall appliance – again, leverage route tables
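As a rough illustration of the route-table approach above – the resource names and the firewall’s internal IP are assumptions, not values from this design:

# Send all traffic leaving the trusted subnet via the firewall's internal interface
$rt = New-AzRouteTable -Name rt-trusted -ResourceGroupName rg-dmz -Location australiaeast
Add-AzRouteConfig -RouteTable $rt -Name default-via-firewall `
    -AddressPrefix 0.0.0.0/0 -NextHopType VirtualAppliance -NextHopIpAddress 10.0.1.4
Set-AzRouteTable -RouteTable $rt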

Cheers 🙂

Azure ARM architecture pattern: the correct way to deploy a DMZ with NSGs

Isolating any subnet in Azure can effectively create a DMZ. Doing this correctly is certainly super easy, but it is also something that can easily be done incorrectly.
Firstly, all that is required is an NSG, and associating that NSG with any given subnet (caveat: remember that NSGs are not compatible with the GatewaySubnet). Doing this will deny most traffic to and from that subnet – mostly relating to the “Internet” tag. What is easily missed is applying a deny-all rule set in both the inbound and outbound rules of the NSG itself.
I’ve seen some clients that have put an NSG on a subnet and assumed that subnet was protected. Unfortunately, that’s not correct.
I’ve seen some clients that have put a deny-all inbound from the internet, and vice versa a deny-all outbound to the internet, and assumed that the subnet was protected and isolated. Unfortunately, that’s also not correct.

How to correctly isolate a subnet to create a DMZ

Azure has 3 default rules that apply to every NSG.
On the inbound side these are: AllowVNetInBound (priority 65000), AllowAzureLoadBalancerInBound (65001) and DenyAllInBound (65500), with matching VNet-allow, internet-allow and deny-all rules on the outbound side.
To view these default rules, at the top of the NSG inbound or outbound rules, select the “Default rules” button.

With these 3 rules, it’s the two higher-precedence rules that can trip people up. The main culprit is rule 65000, which means that any other subnet in your VNET, and any other VNET that is peered with your VNET, is allowed to communicate with your given subnet.
To correctly isolate a subnet in a VNET, we need to create a new rule (I always use the lowest-precedence rule priority number available, 4096) for both inbound and outbound, to deny * ports on * IPs or subnets, overriding these default rules. Azure NSGs work by way of precedence: the lower the rule priority number, the higher the priority when processing the rules (much like any other firewall or network appliance vendor ACL), so this custom deny rule is evaluated before the 65000-range defaults. This 4096 deny rule should look something like the sketch below:
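In PowerShell terms it might look like this – names are placeholders; the inbound rule is shown, and the outbound twin is created the same way with -Direction Outbound:

# Lowest-precedence deny-all rule, evaluated before the 65000-range defaults
$denyAllIn = New-AzNetworkSecurityRuleConfig -Name DenyAllInbound -Priority 4096 `
    -Direction Inbound -Access Deny -Protocol "*" -SourceAddressPrefix "*" `
    -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "*"
New-AzNetworkSecurityGroup -Name dmz-nsg -ResourceGroupName rg-dmz `
    -Location australiaeast -SecurityRules $denyAllIn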

You’ll also notice two warnings when you attempt to save that rule, which explain in better and more succinct English my earlier description of what happens:

  • Warning #1 – This rule denies traffic from AzureLoadBalancer and may affect virtual machine connectivity. To allow access, add an inbound rule with higher priority to allow AzureLoadBalancer to VirtualNetwork.
  • Warning #2 – This rule denies virtual network access. If you wish to allow access to your virtual network, add an inbound rule with higher priority to Allow VirtualNetwork to VirtualNetwork.

With that, the Azure load balancer and VNET traffic from other subnets within the same VNET, or from other VNETs through peering, will be denied. Thus, we have a truly isolated subnet, one that can be set up as a DMZ.
Cheers!

BONUS

Before I go, I just wanted to quickly mention that in Azure, NSGs can also be applied to a single network interface.
If you flip the methodology there, it’s quite easily possible to have no NSGs on any subnets, but rather apply those NSGs to every interface associated with servers and instances in a VNET. There is a key drawback with this approach though: administrative overhead.
With an NSG associated with a server instance, again following the correct deny-all rule mentioned earlier, a single NIC or a single server instance can be isolated in a pseudo-DMZ. The challenge then is, if this process is repeated across 100 servers, keeping all those NSGs up to date and replicating rules when servers need to communicate on various ports or protocols. Administrative overhead indeed!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Writing for the Web – that includes your company intranet!

You can have a pool made out of gold – if the water in it is as dirty and old as a swamp’s, no one will swim in it!

The same can be said about the content of an intranet. You can have the best design, the best developers and the most carefully planned navigation and taxonomy, but if the content and documents are outdated and hard to read, staff will lose confidence in its authority and relevance and start to look elsewhere – or use it as an excuse to get a coffee.
The content of an intranet is usually left to a representative from each department (if you’re lucky!) – usually people that have been working in the company for years. Or, worse yet, to the IT guy. They are going to use very different language to a new starter, or a book keeper, or the CEO. Often content is written for an intranet “because it has to be there”, or to “cover ourselves”, or because “the big boss said so”, with no real thought given to how easy it is to read or who will be reading it.
Content on the internet has changed and adapted to meet users’ needs and to let them find information as quickly as possible. Why isn’t the same attitude applied within your company? If your workers weren’t so frustrated finding the information they need to do their job, maybe they’d perform better; maybe that would result in faster sales; maybe investing in the products your staff use is just as important as investing in the products your consumers use.
I’m not saying that you have to employ a copywriter for your intranet, but at least train the staff you nominate to be custodians of your land (your intranet – your baby).
Below are some tips for your nominated content authors.

The way people read has changed

People read differently thanks to the web. They don’t read. They skim.

  • They don’t like to feel passive
  • They’re reluctant to invest too much time at one site or your intranet
  • They don’t want to work for the information

People DON’T READ MORE because

  • What they find isn’t relevant to what they need or want.
  • They’re trying to answer a question and they only want the answer.
  • They’re trying to do a task and they only want what’s necessary.

Before you write, identify your audience

People come to an intranet page with a specific task in mind. When developing your page’s content, keep your users’ tasks in mind and write to ensure you are helping them accomplish those tasks. If your page doesn’t help them complete that task, they’ll leave (or call your department!)
Ask these questions

  • Who are they? New starters? Experienced staff? Both? What is the lowest common denominator?
  • Where are they? At work? At home? On the train? (Desktop, mobile, laptop, iPad)
  • What do they want?
  • How educated are they? Are they familiar with business jargon?

Identify the purpose of your text

As an intranet, the main purpose is to inform and educate, not so much to entertain or sell.
When writing to present information, ensure:

  • Consistency
  • Objectivity
  • Consider tables, diagrams or graphs

Structuring your content

Headings and Sub headings

Use headings and sub headings for each new topic. This provides context and informs readers about what is to come. It provides a bridge between chunks of content.

Sentences and Paragraphs

Use short sentences. And use only 1-3 sentences per paragraph.
‘Front Load’ your sentences. Position key information at the front of sentences and paragraphs.
Chunk your text. Break blocks of text into smaller chunks. Each chunk should address a single concept. Chunks should be self-contained and context-independent.

Layering vs scrolling

It is OK if a page scrolls – it just depends how you break up your page! Users’ habits have changed in the past 10 years due to mobile devices; scrolling is not a dirty word, as long as the user knows there’s more content on the page through visual cues.

Use lists to improve comprehension and retention

  • Bullets for list items that have no logical order
  • Numbered lists for items that have a logical sequence
  • Avoid the lonely bullet point
  • Avoid death by bullet point

General Writing tips

  • Write in plain English
  • Use personal pronouns. Don’t say “Company XYZ prefers you do this”; say “We prefer this”
  • Make your point quickly
  • Reduce print copy – aim for 50% less copy than what you’d write for print
  • Be objective and don’t exaggerate
  • USE WHITE SPACE – this makes content easier to scan, and it is more obvious to the eye that content is broken up into chunks.
  • Avoid jargon
  • Don’t use inflated language

Hyperlinks

  • Avoid explicit link expressions (eg. Click here)
  • Describe the information readers will find when they follow the link
  • Use VERBS (doing word) as links.
  • Warn users of a large file size before they start downloading
  • Use links to remove secondary information from the bulk of the text (layer the content)

Remove

  • Empty words and phrases
  • Long words or phrases that could be shorter
  • Unnecessary jargon and acronyms
  • Repetitive words or phrases
  • Adverbs (e.g., quite, really, basically, generally, etc.)

Avoid Fluff

  • Don’t pad your writing with unnecessary sentences
  • Stick to the facts
  • Use objective language
  • Avoid adjectives, adverbs, buzzwords and unsubstantiated claims

Tips for proofreading

  1. Give it a rest
  2. Look for one type of problem at a time
  3. Double-check facts, figures, dates, addresses, and proper names
  4. Review a hard copy
  5. Read your text aloud
  6. Use a spellchecker
  7. Trust your dictionary
  8. Read your text backwards
  9. Create your own proofreading checklist
  10. Ask for help!

A Useful App

Hemingwayapp.com assesses how good your content is for the web.

A few examples (from a travel page)

Bad Example

Our Approved and Preferred Providers

Company XYZ has contracted arrangements with a number of providers for travel.  These arrangements have been established on the basis of extracting best value by aggregating spend across all of Company XYZ.

Why it’s Bad

Use personal pronouns such as we and you, so the user knows you are talking to them. They know where they work. Remove Fluff.

Better Example

Our Approved and Preferred Providers

We have contracted arrangements with a number of providers for travel, to provide best value.

Bad Example

Travel consultant:  XYZ Travel Solutions is the approved provider of travel consultant services and must be used to make all business travel bookings.  All airfare, hotel and car rental bookings must be made through FCM Travel Solutions

Why it’s bad

The author is saying the same thing twice in two different ways. This can easily be said in one sentence.

Better Example

Travel consultant

XYZ Travel Solutions must be used to make all airfare, hotel and car rental bookings.

Bad Example

Qantas is Company XYZ preferred airline for both domestic and international air travel and must be used where it services the route and the “lowest logical fare” differential to another airline is less than $50 for domestic travel and less than $400 for international travel

Why it’s bad

This sentence is too long, and it’s a case of using too much jargon. What does “lowest logical fare” even mean? And the second part does not make any sense. What exactly are they trying to say here? I am not entirely sure, but if my guess is correct, it should read something like the version below.

Better Example

Qantas is our preferred airline for both domestic and international air travel. When flying, choose the cheapest rate available within reason. You can only choose another airline if it is cheaper by $50 for domestic and cheaper by $400 for international travel.

Bad Example

Ground transportation:  Company XYZ preferred provider for rental vehicle services is Avis.  Please refer to the list of approved rental vehicle types in the “Relevant Documents” link to the right hand side of this page.

Why it’s bad

Front load your sentences, with the most important information first. Don’t make a user dig for a document – link the relevant document right there. Link the verb. Don’t say CLICK HERE!

Better Example

Ground transportation

Avis is our preferred provider for renting vehicles.
View our list of approved rental vehicles.

Bad Example

Booking lead times:  To ensure that the best airfare and hotel rate can be obtained, domestic travel bookings should be made between 14-21 days prior to travel, and international travel bookings between 21 and 42 days prior to travel.  For international bookings, also consider lead times for any visas that may need to be obtained.

Why it’s bad

Front load your sentence – most important information first. This is also a good opportunity to chunk your text.

Better Example

Booking lead times

Ensure you book your travel early:
14-21 days prior to travel for domestic
21-42 days prior to travel for international (also consider lead times for visas)
This will ensure that the best airfare and hotel rates can be obtained.

 
