Brisbane O365 Saturday

On the weekend I had the pleasure of presenting at the O365 Saturday Brisbane event.

In my presentation I demonstrated a new feature within Azure AD that allows the automatic assignment of licences within any of your Azure subscriptions using Dynamic Groups. So what’s cool about this feature?

Well, if you have a well-established organisational structure within your on-premises AD and you are synchronising the attributes needed to identify that structure, then you can have licences automatically assigned to your users based on their job type, department or even location. The neat thing about this is that it drastically simplifies the management of your licence allocation, which until now has largely been done through complicated scripting processes, either through an enterprise IAM system or through the service desk when users are being set up for the first time.
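As a concrete sketch, the dynamic membership rule for such a group might key off the synchronised attributes like this (the attribute values here are hypothetical):

```
(user.department -eq "Finance") -and (user.country -eq "AU")
```

Any user whose synchronised attributes match the rule is added to the group automatically, and group-based licensing takes care of the rest.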

You can view or download the presentation by clicking on the following link.

O365 Saturday Automate Azure Licencing

Throughout the Kloud Blog there is an enormous amount of material on innovative ways to use Azure AD to advance your business. If you would like a more detailed post on this topic, please leave a comment below and I’ll put something together.


Resolving Microsoft Identity Manager “sync-rule-validation-parsing-error” error

A couple of weeks back I inherited a Microsoft Identity Manager development environment that wasn’t quite complete. When I performed a sync on a user object I got the following error: sync-rule-validation-parsing-error

Looking into the error for further details, Details and Stack Trace were both greyed out as shown below.

I looked at the object being exported on the MA and the awaiting-export details and found slightly different information. The error was CS to MV to CS synchronization failed 0x8023055a.

Still not a lot to go on. So I looked in the Application Event Log and nothing. Anything in the System Event Log? No, nothing.

So my attention turned to the Export Synchronization Rule. Here is a partial screenshot of the Export Sync Rule. The object (user) in question had been flagged as inactive and the intent appeared to be a clearing of a number of attributes. Sending “” (crude empty/null) to an attribute isn’t very elegant.

I changed each to use the null function. So for export, null() will flow to each of the attributes. I tried the export again and the same error and problem resulted.

Running short on ideas I created a brand new Export Synchronization Rule and replicated the configuration except for the attributes being exported. Then I added one attribute into the rule at a time, tested the export and repeated until I could replicate the error.

I was able to replicate the error once I hit the terminalServer attribute.
*Note: the screenshot below is prior to changing over to flow null() instead of “”.

Sending null() to the terminalServer Active Directory attribute was causing the error. It was at this point I actually just removed that flow rule and continued with other tasks.

Coming back to this later and thinking it through, I understand the error. When dealing with Terminal Services you normally manage four attributes that are packed into the userParameters attribute. The four attributes that define a user’s Terminal Services profile are:

  • allowLogon
  • terminalServicesHomeDirectory
  • terminalServicesProfilePath
  • terminalServicesHomeDrive

For a user that has a fully configured set of Terminal Services attributes, sending null() to the terminalServer attribute isn’t going to work.

So I’m posting this because I couldn’t find any reference to sync-rule-validation-parsing-error or CS to MV to CS synchronization failed 0x8023055a elsewhere, chances are I’ll come across it again, and it’ll probably help someone else too.

Getting started with Azure Cloud Shell

A few weeks back I noticed that I now had the option for the Azure Cloud Shell in the Azure Portal.

What is Azure Cloud Shell?

Essentially, rather than having the Azure CLI installed on your local workstation, you can now initiate it from the Portal, with 5 GB of storage automatically assigned (initiated as part of the setup) and associated with it. So you can now create, manage and delete Azure resources using a centrally hosted CLI session. Each time you start your shell, your home drive will mount and your profile, scripts and whatever else you’ve stored in it will be available to you. Nice. Let’s do it.

Getting Started

Login to the Azure Portal and click on the Cloud Shell icon.

As this is the first time you’ve accessed it, you will not have any storage associated with your Azure Cloud Shell. You will be prompted for storage information.

Azure Files must reside in the same region as the machine being mounted to. Cloud Shell machines currently (July 2017) exist in the below regions:

  • Americas: East US, South Central US, West US
  • Europe: North Europe, West Europe
  • Asia Pacific: India Central, Southeast Asia

I hit the Advanced Settings to specify creation of a new Resource Group, Storage Account and File Share.

The UI doesn’t check the configuration settings for uniqueness until you submit them, so you might need a couple of attempts at naming your storage account. As you can see below, it isn’t surprising that my attempt to use azcloudshell as a “Storage Account Name” was already taken.
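Storage account names must be 3–24 characters of lowercase letters and digits, and globally unique, which is why a short generic name is usually taken. A quick way to generate and sanity-check a candidate from any shell (the azcs prefix is just an example):

```shell
# Generate a candidate storage account name; a timestamp suffix
# makes a global collision unlikely.
name="azcs$(date +%s)"

# Validate against the naming rules: 3-24 chars, lowercase letters and digits only.
if printf '%s' "$name" | grep -Eq '^[a-z0-9]{3,24}$'; then
  echo "candidate: $name"
else
  echo "invalid storage account name: $name" >&2
  exit 1
fi
```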

Providing unique values for these options let the initial creation go through nicely. I now had a home drive created for my profile and any files I create or store for my sessions.

As for the commands you can use with the Azure CLI, have a look here for the full list you can use to create, manage and delete your Azure resources.

Personally I’m currently doing a lot with Azure Functions. A list of the full range of Azure Functions CLI commands is available here.

The next thing I looked to do was to put my scripts etc. into the clouddrive. I just navigated to the new Storage Account that I created as part of this and uploaded them via the browser.

Below you can see the file I uploaded on the right which appears in the directory in the middle pane.

Using the Azure CLI I changed directories and could see my uploaded file.

And that is pretty much it. Continue as you would with the CLI, but just now with it all centrally stored. Sweet.

Deploy a PHP site to Azure Web Apps using Dropbox


I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned on Github here:

The video is without sound, but should be easy enough to follow without.

It’s so simple even your dog could do it.



Social Engineering Is A Threat To Your Organisation

Of the many attacks, hacks and exploits perpetrated against organisations, one of the most common vulnerabilities businesses need to guard against stems from the general goodness (or weakness, depending on how you choose to look at it) of human nature, exploited by means of social engineering.

Social engineering is a very common problem in cyber security. It consists of the simple act of getting an individual to unwittingly perform an unsanctioned or undesirable action under false pretences, whether granting access to a system, clicking a poisoned link, revealing sensitive information or taking any other improperly authorised action. The act relies on the trusting nature of human beings and their drive to help and work with one another, all of which makes social engineering hard to defend against and detect.

Some of the better known forms of social engineering include:


Phishing

Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business—a bank, or credit card company—requesting “verification” of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate—with company logos and content—and has a form requesting everything from a home address to an ATM card’s PIN or a credit card number. [Wikipedia]


Tailgating

An attacker, seeking entry to a restricted area secured by unattended, electronic access control, e.g. by RFID card, simply walks in behind a person who has legitimate access. Following common courtesy, the legitimate person will usually hold the door open for the attacker or the attackers themselves may ask the employee to hold it open for them. The legitimate person may fail to ask for identification for any of several reasons, or may accept an assertion that the attacker has forgotten or lost the appropriate identity token. The attacker may also fake the action of presenting an identity token. [Wikipedia]


Baiting

Baiting is like the real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and wait for victims. For example, an attacker may create a disk featuring a corporate logo, available from the target’s website, and label it “Executive Salary Summary Q2 2012”. The attacker then leaves the disk on the floor of an elevator or somewhere in the lobby of the target company. An unknowing employee may find it and insert the disk into a computer to satisfy his or her curiosity, or a good Samaritan may find it and return it to the company. In any case, just inserting the disk into a computer installs malware, giving attackers access to the victim’s PC and, perhaps, the target company’s internal computer network. [Wikipedia]

Water holing

Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website he or she often visits. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems. [Wikipedia]

Quid pro quo

Quid pro quo means something for something. An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually this person will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware. [Wikipedia]

Now do something about it!

As threats to an organisation’s cyber security go, social engineering is a significant and prevalent threat, not to be underestimated.

However, the following are some of the more effective means of guarding against it.

  1. Be vigilant over the phone, through email and online.
  2. Be healthily skeptical and aware of your surroundings.
  3. Always validate the requestor’s identity before considering their request.
  4. Validate the request with another member of staff if necessary.

Means of mitigating social engineering attacks:

  1. Use different logins for all resources.
  2. Use multi-factor authentication for all sensitive resources.
  3. Monitor account usage.

Means of improving your staff’s ability to detect social engineering attacks:

  1. Educate your staff.
  2. Run social engineering simulation exercises across your organisation.

Ultimately, of course, the desired outcome of bolstering your organisation’s ability to detect a social engineering attack is that the targeted user isn’t fooled by the attempt against their trust and acts accordingly, such as knowing not to click the link in an email purporting to help them retrieve their lost banking details.

Some additional tips:

  1. Approach all unsolicited communications with skepticism, no matter who the originator claims to be.
  2. Pay close attention to the target URL of all links by hovering your cursor over them to hopefully reveal their true destination.
  3. Look to the HTTPS digital certificate of all sensitive websites you visit for identity information.
  4. Use spam filtering, antivirus software and anti-phishing software.
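On point 2 above, remember that the part of a link that matters is the hostname between the scheme and the first slash; attackers often bury a trusted brand inside a subdomain of a domain they control. A rough illustration (the URL is made up):

```shell
# A deceptive link: "mybank.com" appears first, but the domain
# actually being visited is "evil-site.example".
url="https://mybank.com.evil-site.example/reset-password"

# Strip the scheme, then everything from the first slash onwards.
host="${url#*://}"
host="${host%%/*}"

echo "real destination host: $host"
```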

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM’s SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern for those organisations and their customers.

Many of Google’s cloud customers leave shell access publicly accessible over protocols such as SSH and, much worse still, telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well-known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and POODLE (CVE-2014-3566), to name but a few.

Digging further into their methodologies to see whether these attacks were random or targeted, it appears these actors are honing their skills, tailoring their probes and attacks to specific providers and organisations.

Rapid7’s research was conducted by means of honeypots: hosts and services made available solely for the purpose of capturing untoward activity, with a view to studying how these malicious outsiders do their work. What’s more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics to mine their findings and scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography in partnership with all major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report provides clear evidence that hackers are tailoring their attacks to different providers and organisations, it reads as more of an indictment of the poor standard of security deployed by some organisations in the public cloud today than a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Windows Information Protection – enabling BYO

Windows 7 has entered the extended support phase of its lifecycle.  What does this mean? Microsoft won’t end security updates for your Windows 7 PCs until the 14th of January 2020, so security is covered.  However, feature updates and bug fixes, along with free phone and online support, have already ended.  At the same time as Windows 7 leaves extended support, Office 365 connection policies are changing to only allow Office clients in mainstream support to connect (that will be Microsoft Office 2016 or later and Microsoft Office 365 ProPlus)[i].  So, if you’re running Windows 7 and/or Office 2013 or earlier, now is the time to look to the future.

As we all know from the press and personal usage, the real successor to Windows 7 is the evergreen, semi-annually updated Windows 10.  The continual change of Windows 10 (aka Windows as a service), along with the evergreen SaaS apps enterprises are increasingly adopting and an end-user expectation of always-updated, current apps (courtesy of smartphones), means the desktop strategies of yesterday (i.e. tightly managed, infrequently updated, limited or no personalisation) no longer look appropriate.

And BYO remains a hot topic for customers and pundits alike.

So how can you manage a continually changing desktop and support BYOD yet maintain the security of your data?

Microsoft have introduced a couple of capabilities to address these problems.  This blog will focus on developments in the ability to protect corporate data on lightly managed corporate or private devices – specifically, data at rest.

Windows Information Protection (WIP) is a new capability that harnesses Azure Rights Management and Intune (also available via System Center Configuration Manager) to protect data on Windows 10 Anniversary Update (build 1607) or later devices.  These are all part of the Azure Information Protection offering that addresses both client and server side protection of data.

WIP is an option under Intune -> Mobile Apps -> App Protection Policies.  As with any other Intune policy WIP can be applied to any Azure AD user group.

WIP has two main considerations regarding data security; data source and data access.

Data Source

In WIP you define network boundaries.   Below is the network boundaries blade in Intune.

Network boundary

A network boundary effectively defines where data does not need to be protected (i.e. within the boundary, say Office 365) and where it does (i.e. accessing outside the boundary such as downloading a file from Office 365, as per the figure below).


On-premises applications and file servers could sit within another network boundary, as could other SaaS options.  When data from within a network boundary lands outside it (say, on a PC on the internet), it should be marked as “work” and encrypted, as shown below.
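For reference, the cloud resources entry of a WIP network boundary is a pipe-delimited list of corporate domains; a hypothetical tenant (contoso is a placeholder) might define its Office 365 boundary as:

```
contoso.sharepoint.com|contoso-my.sharepoint.com
```

Traffic to those hosts is then treated as inside the boundary (work data); anything else is outside it.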


Data Access

WIP has the concept of “Allowed Apps”.  These are applications defined within the WIP policy that are allowed to access work data.  Below is the allowed apps blade in Intune.

allowed apps

Microsoft classifies applications into “enlightened apps” and “unenlightened apps”. Enlightened apps can differentiate between corporate and personal data, correctly determining which to protect based on your policies.  Unenlightened apps can’t differentiate between corporate and personal data, and so all data is considered corporate and encrypted (by Windows, not the app).  The Microsoft client apps (Office, Edge, Notepad, etc.) are examples of “enlightened apps”.  Finally, if an application is not defined in Allowed Apps then it can’t read work data, nor can corporate data be cut and pasted into an app that is not Allowed.  In the scenario where an “unenlightened app” won’t work with WIP, it can be defined as exempt, and corporate data it handles is not encrypted.

With Windows Information Protection, Windows now includes the functionality necessary to identify personal and business information, determine which apps have access to it, and provide the basic controls necessary to determine what users are able to do with business data (e.g.: Copy and Paste restrictions). Windows Information Protection is designed specifically to work with the Office 365 ProPlus and Azure Rights Management, which can help protect business data when it leaves the device or when it’s shared with others (e.g.: Print restrictions; Email forwarding).[ii]

And this capability is available in all editions of Windows 10 Anniversary Update (build 1607 or later).

Do you need Azure RMS?

WIP is focussed on securing enterprise data on a device.  It does not address securing enterprise data in the wild.  Azure RMS provides rights management to data once it has left a device.  Azure RMS works with a fairly limited set of applications (mainly Microsoft Office across most platforms). With WIP alone a protected file can’t be shared with another user, say by USB or an external drive or even an email attachment. It will be encrypted and inaccessible.  With RMS data protection can be extended to data that leaves the device, such as an email attachment from an enlightened app (think Word, Excel, PowerPoint, OneNote, etc.) or a file on a USB drive or a cloud drive. With RMS you can audit and monitor usage of your protected files, even after these files leave your organisation’s boundaries.

Addressing your Information Protection needs (on Windows 10)

WIP is not the definitive be-all and end-all capability for protecting corporate data.  Rather, it is part of a suite of capabilities that Microsoft provides on Windows 10.  BitLocker protects the device, WIP provides data separation and data leakage protection, and AIP provides additional, more complex data leakage protection as well as sharing protection.   These three capabilities combine to protect data at rest, in use and when shared.

So, now enterprise data can be secured on a Windows 10 device rather than the traditional approach of securing the device; suddenly BYOD doesn’t look that scary or impractical.

[i] Taken from Office 365 system requirements changes for Office <>

[ii] Taken from Introducing Windows Information Protection <>


Security Vulnerability Revealed in Azure Active Directory Connect


The existence of a new and potentially serious privilege escalation and password reset vulnerability in Azure Active Directory Connect (AADC) was recently made public by Microsoft.

Fixing the problem can be achieved by upgrading to the latest available release of AADC, 1.1.553.0.

The Microsoft security advisory qualifies the issue as important and was published on Technet under reference number 4033453:

Azure Active Directory Connect, as we know, takes care of all operations related to the synchronisation of identity information between on-premises environments and Azure Active Directory in the cloud. The tool is also the recommended successor to Azure AD Sync and DirSync.

Microsoft were quoted as saying…

The update addresses a vulnerability that could allow elevation of privilege if Azure AD Connect Password writeback is mis-configured during enablement. An attacker who successfully exploited this vulnerability could reset passwords and gain unauthorized access to arbitrary on-premises AD privileged user accounts.

When setting up the permission, an on-premises AD Administrator may have inadvertently granted Azure AD Connect with Reset Password permission over on-premises AD privileged accounts (including Enterprise and Domain Administrator accounts)

In this case, as stated by Microsoft, the risk consists of a situation where a malicious administrator resets the password of an Active Directory user using password writeback, allowing the administrator in question to gain privileged access to a customer’s on-premises Active Directory environment.

Password writeback allows Azure Active Directory to write passwords back to an on-premises Active Directory environment, which helps simplify the process of setting up and managing complicated on-premises self-service password reset solutions. It also provides a rather convenient cloud-based means for users to reset their on-premises passwords.

Users may look for confirmation of their exposure to this vulnerability by checking whether the feature in question (password writeback) is enabled and whether AADC has been granted reset password permission over on-premises AD privileged accounts.

A further statement from Microsoft on this issue read…

If the AD DS account is a member of one or more on-premises AD privileged groups, consider removing the AD DS account from the groups.

CVE reference number CVE-2017-8613 was attributed to the vulnerability.

Quickly deploying all the things from Excel with PowerShell

As an ex-Excel Developer, I tend to resolve any perceived inefficiencies when dealing with any tabular data by automating the snot out of it with Excel and VBA. As a SharePoint developer, there is no shortage of stuff that we need to provision into SharePoint to build business solutions, often repeatedly throughout the development cycle or between environments (Dev, Test, UAT, etc).

PowerShell was always going to be the automation tool of choice, and while PowerShell scripts are commonly fed with CSV files of flat tabular data, editing those has always been painful if you’ve got more than a handful of rows. Excel has always been a far better tool than a text editor for quickly editing tabular data, thanks to features like autofill, but that’s just the tip of the iceberg…

PowerShell Deployment Agent

So, I’ve been using Excel and VBA to deploy things for a long time (about 10 years at best guess). I’ve changed companies a handful of times over that period, and every time I do I create a new and better Excel model for generating stuff in whatever platform I was working with. Last year, I started to open source that model as it had reached a maturity level where I no longer want to start over – it’s pretty darned solid. The simplest model of this was called CSV-Exporter and it did what it said on the tin. I’ve extended it into what I now call my PowerShell Deployment Agent (PSDA), which doesn’t just export CSVs from Excel Tables, but also launches a PowerShell script of your choosing to streamline the deployment process.

By careful script design, Excel filtering, and of course some VBA, this allows for some very fine-grained control over exactly what you want to deploy – to the extent that you can just highlight a bunch of rows in a sheet and hit CTRL-SHIFT-D to deploy them to your target:


When / Why would you use it?

Excel’s always been a good way to store configuration data, but it really comes into its own as a launching pad for PowerShell when you have rows of data that need to be pumped through the same cmdlet or function.

To prove how easy it is to get started, we’ll go with the use-case that I’m usually working with: SharePoint.

So let’s open the workbook and pick something simple that we want to deploy. How about folders in a document library?

All we need to do once we’ve input our target site is:

  • Find the PnP PowerShell cmdlet we want (Add-PnPFolder in this case)
  • Click the ‘New Blank Sheet’ button
  • Select the name for our sheet (‘Folders’ in this case – a new .ps1 file with the same name will be created from a template in the ‘Functions’ subfolder under the path to your deployment script)
  • Copy and paste the cmdlet signature to let PSDA know what columns to map to the cmdlet parameters
  • Fill in our data
  • Double click the URL of the target site we want to deploy to
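To make the column-to-parameter mapping concrete: Add-PnPFolder takes -Name and -Folder parameters, so the exported ‘Folders’ sheet reduces to a CSV along these lines (the folder names here are hypothetical):

```
Name,Folder
Invoices,Shared Documents
Receipts,Shared Documents
```

Each row then becomes one call to the cmdlet against the target site.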


This is obviously the most basic scenario and there’s a good chance that you’ll want to customize both the auto-generated script and the table (with some of the advanced features below).

Speaking of… what other benefits of using Excel over raw CSVs are there? I’m glad you asked.


Formulas

You obviously can’t have a calculated value in a CSV file, which means that your PowerShell script is more complex than it needs to be, performing that calculation on each row at run-time. Excel is clearly the superior tool here – you can see whether your calculation is correct right there in the cell.

PSDA Perk – Formulas are exported as values when you are deploying against a target, but are preserved when you want to export your data for versioning, etc.


Comments

You obviously can’t have comments in a CSV, but these are very useful in Excel, particularly in column headers to advise what data/format to put in that column.

PSDA Perk – You can preserve column header comments when exporting from the workbook. They will be reinstated when re-importing.

Data Validation

Want to guard against data entry errors while keeping your script clean? Use Excel to prevent erroneous entries before deployment with data validation. You can restrict based on a formula, or a reference cell range (there’s a reference sheet in the workbook for that).

PSDA Perk – Data validation rules will be exported and reinstated on re-import.

Conditional Formatting

In addition to data validation, conditional formatting is a powerful way to show that some data is incorrect or missing under certain conditions. Obviously, from a data entry standpoint, we don’t need anything too fancy here – usually setting the font or background of the cell to red when a formula evaluates to false is all we need to prompt the user.

PSDA Perk – Basic conditional formatting (as per the above) can be exported and reinstated on re-import.

A Note on Credentials

If you went to the trouble of downloading the workbook and poking it with a stick, you’ll note that there is a single column in the launch table on the control sheet for the credential to use for each target environment. Nowhere to put a password. Because you shouldn’t be storing passwords in Excel workbooks. Ever.

Instead, you should be using something like CredentialManager, which leverages the Windows Credential Manager (a safe place to store your admin passwords). That means you just refer to the label of the credential in WCM in the workbook. Nice and clean, and anyone getting hold of your workbook doesn’t have the keys to your environments listed.

If you are using the outstanding PnP PowerShell cmdlets for deploying to SharePoint Online/On Premises, you get this functionality OOTB (no need to use CredentialManager).

Thanks for checking this out and please try out the workbook and let me know what you like or needs improvement.


Implementing Bootstrap and Font-awesome in SharePoint Framework solutions using React

Responsive design has been the biggest driving factor for SharePoint Framework (SPFx) solutions. In a recent SPFx project for a customer, we developed a component using React, Bootstrap and Font-awesome icons for a responsive look and feel. While building the UI piece, we encountered many issues during the initial set up, so I am writing this blog with detailed steps for future reference. One of the key fixes mentioned in this post is for the WOFF2 font file type used by both font-awesome and bootstrap.

In this blog post, I will not be detailing the technical implementation (business logic and functionality), just focusing on the UI implementation. The following steps outline how to configure SPFx React client-side web parts to use Bootstrap and Font-awesome CSS styles.


  1. Create a SharePoint Framework project using Yeoman. Refer to this link from the Microsoft docs for reference.
  2. Next, install JQuery, Bootstrap and Font-awesome using npm so that it can be available from within node_modules
    npm install jquery --save
    npm install @types/jquery --save-dev
    npm install bootstrap --save
    npm install @types/bootstrap --save-dev
    npm install font-awesome --save

    Check the node_modules folder to make sure they got installed successfully

  3. Now locate the config.json file in the config folder and add the entry below for third-party JS library references.
    "externals": {
      "jquery": {
        "path": "node_modules/jquery/dist/jquery.min.js",
        "globalName": "jQuery"
      },
      "bootstrap": {
        "path": "node_modules/bootstrap/dist/js/bootstrap.min.js",
        "globalName": "bootstrap"
      }
    }

    Then reference them in the .ts file in the src folder using import

    import * as jQuery from "jquery";
    import * as bootstrap from "bootstrap";
  4. For CSS references, we can either refer to the public CDN links using SPComponentLoader.loadCss() or refer to the local version as below in the .tsx file
    Note: Don’t use ‘require’ for js scripts as they are already imported in above step. If included again it will cause a component load error.

  5. When using React, copy the html to the .tsx file in the components folder. If you want to use the HTML CSS classes as-is and not the SASS way, refer to this blog post. For image references, here is a good post to refer.
    For anyone new to React like me, a few tips below for styling:
    1. Use className instead of the HTML class attribute
    2. To use inline styles, use style={{style attributes}} or define an object, since inline styles in JSX are objects rather than strings
  6. When ready, use gulp serve to launch your solution in local workbench.
    Important: If you’re using custom icons or fonts from the above CSS libraries, you will receive TypeScript errors saying that a loader module was not found for the WOFF2 font type. You will need to push a custom loader for the WOFF2 font type through gulpfile.js, as below. First, install url-loader from npm.

     npm install url-loader --save-dev

    Then modify gulpfile.js at the root directory to load the custom loader.

      build.configureWebpack.mergeConfig({
        additionalConfiguration: (generatedConfiguration) => {
          generatedConfiguration.module.rules.push(
            { test: /\.woff2(\?v=[0-9]\.[0-9]\.[0-9])?$/, loader: 'url-loader', query: { limit: 10000, mimetype: 'application/font-woff2' } });
          return generatedConfiguration;
        }
      });

    Now gulp serve your solution and it should work fine.

You might still face CSS issues in the solution where the referenced CSS doesn’t exactly match the HTML/CSS implementation. To resolve any conflicts, use a CSS override (!important) wherever necessary.
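For example, if the surrounding page theme overrides a Bootstrap rule you depend on, a narrowly scoped override keeps the markup untouched (the selector and colour here are hypothetical):

```css
/* Win the specificity battle against the host page's theme */
.my-spfx-webpart .btn-primary {
  background-color: #0275d8 !important;
}
```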

In this post I have shown how we can configure the Bootstrap and Font-awesome third-party CSS files to work with SharePoint Framework solutions while leveraging the React framework.