Weekly AWS update: Friday 15th February 2019

Well, it's Friday again and that can only mean one thing… it's time again for my weekly update on all things AWS. Last week was a big week for developers and while this week has also seen a number of new features for our developer friends, Amazon Web Services has also brought us new instance types, storage options and functionality to what's becoming a favourite of mine, Amplify. This article continues our weekly series on the happenings in the world of Amazon Web Services. It's not meant to be an exhaustive list of all the updates and changes to the AWS eco-system, but simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. As always, if you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • Amazon Corretto 11 is Now in Preview
  • Amplify Framework Adds New Features
  • Five New Amazon EC2 Bare Metal Instances
  • Amazon EFS Introduces Lower Cost Storage Class
  • Amazon GuardDuty Adds Three New Threat Detections

Amazon Corretto 11 is Now in Preview

As I mentioned last week, Amazon Corretto version 8 recently reached General Availability, and AWS are planning to release version 11 before April of this year. Well, on Wednesday AWS announced that Corretto version 11 has now reached preview and is available for download from the Corretto product page here. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). For an introduction to Amazon Corretto, you can visit the announcement page here. We will be keeping an eye on the progression of version 11 and will update you on its progress.

Amplify Framework Adds New Features

And while we're on the topic of new features that will delight the AWS developers among us, this week also saw an update to AWS Amplify. The Amplify CLI, part of the Amplify Framework, now supports multiple environments and teams by offering a Git-style workflow for creating and switching between environments for your Amplify project. When you work on a project within a team, you can create isolated back-ends per developer or alternatively share back-ends across developers, including those outside your organisation. This week's announcement sees the introduction of several new features for the Amplify Framework including:

  • Support for IAM roles and MFA (Multi-Factor Authentication)
  • Custom resolvers for AWS AppSync
  • Support for up to 150 GraphQL transformer models, up from 15
  • Support for multiple environments

These new announcements really open up the possibilities that Amplify can address within enterprise and large team environments. The addition of support for IAM roles and MFA means it now supports standard best-practice deployments (everybody should have MFA enabled on their IAM accounts and if you don't, do it now… I'll wait), while the added support for multiple environments is going to greatly simplify the workflows within larger teams keen to leverage Amplify's capabilities. The addition of custom resolver support, which lets you reference resources such as Amazon DynamoDB tables, Amazon Elasticsearch Service domains or HTTP endpoints that were provisioned independently of the Amplify GraphQL Transformer from within your Amplify project, together with the increase in the number of supported transformer models, already has my mind racing with possibilities, so don't be surprised if you see more Amplify focused articles from me in the future.
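
For those wanting to try the multiple environments feature, here's a minimal sketch of the Amplify CLI workflow, assuming the CLI is already installed and your project has been initialised with amplify init (the environment name "dev" is just an example):

# Create a new, isolated back-end environment (you'll be prompted for a name, e.g. "dev")
amplify env add

# Show all environments configured for this project
amplify env list

# Switch your local project to the "dev" environment
amplify env checkout dev

# Deploy any back-end changes to the current environment
amplify push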

Five New Amazon EC2 Bare Metal Instances

Did somebody say new instance? I think they did. Another announcement on Wednesday saw the Bare Metal team release five new instances in a range of regions throughout the world, and yes… Sydney is on the list (or at least for some of them). The new additions to the Bare Metal family are:

  • M5.metal, a 48 physical/96 logical core instance with 384 GB of RAM, 25Gbps of available network bandwidth and 14,000 Mbps of EBS bandwidth.
  • M5d.metal, the same as its M5 counterpart only with the addition of 4 x 900GB NVMe SSD local drives.
  • R5.metal, a 48 physical/96 logical core instance with 768 GB of RAM, 25Gbps of available network bandwidth and 14,000 Mbps of EBS bandwidth.
  • R5d.metal, the same as its R5 counterpart only with the addition of 4 x 900GB NVMe SSD local drives.
  • Z1d.metal, a 24 physical/48 logical core instance with 384 GB of RAM, 25Gbps of available network bandwidth, 14,000 Mbps of EBS bandwidth and 4 x 900GB NVMe SSDs.

For a full listing of where these new instances are available, you can visit the announcement here, but both the M5.metal and M5d.metal are available in Sydney and are ready for deployment.
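
Launching a bare metal instance works the same as any other instance type; the sketch below shows the idea via the AWS CLI, with the AMI, key pair and subnet IDs as placeholders you would replace with your own:

# AMI, key pair and subnet IDs below are placeholders - replace with your own values
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.metal \
  --key-name my-key-pair --subnet-id subnet-0123456789abcdef0 --count 1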

Amazon EFS Introduces Lower Cost Storage Class

Next cab off the rank is yet another announcement from Wednesday (Wednesday was a busy day in Seattle), this time from our storage friends with the release of a lower cost storage class for Elastic File System (EFS). This one is exciting as I always like announcements that can save me money. In case you've not heard of it (and you might not have if you live in the Windows world), EFS provides a simple, scalable, elastic file system for Linux-based workloads for use with AWS Cloud services and on-premises resources. With this new feature, you can create a new EFS file system, enable Lifecycle Management and have infrequently accessed files automatically moved to the new Infrequent Access (IA) storage class (S3 users should see where we are going here). Much like the S3 equivalent, Infrequent Access storage comes at a much cheaper price (at the time of writing in the Sydney region, Standard storage is $0.36 per GB and IA storage is $0.054 per GB), however you must also pay a charge for access requests (currently $0.12 per GB transferred) when that IA data is transferred off the storage.

Why am I so excited about this? Well, if we configure a new file system with a Lifecycle Management policy that automatically migrates any data that hasn't been accessed in 30 days to the IA storage class, we instantly start saving money without having to change anything at the server end (no changes to workflow or application settings). Guess I know what I'll be doing over the weekend.
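
As a rough sketch of what that looks like from the CLI (the file system ID below is a placeholder, and 30 days is the transition window described above):

# Move files that haven't been accessed for 30 days to the Infrequent Access tier
aws efs put-lifecycle-configuration \
  --file-system-id fs-0123456789abcdef0 \
  --lifecycle-policies TransitionToIA=AFTER_30_DAYS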

Amazon GuardDuty Adds Three New Threat Detections

And finally, for this week's roundup, we have three (3) new features from our friends over in GuardDuty. GuardDuty has very quickly become an "on by default" service for us here at Kloud as the benefits you gain from its insights are invaluable, and these three new additions only make it more attractive for anybody running workloads in AWS. As stated in the product documentation, "Once enabled, Amazon GuardDuty continuously monitors for malicious or unauthorised behaviour to help protect your AWS resources, including your AWS accounts and access keys. GuardDuty identifies unusual or unauthorised activity, like cryptocurrency mining or infrastructure deployments in a region that has never been used. When a threat is detected, you are alerted with a GuardDuty security finding that provides detail of what was observed, and the resources involved. Powered by threat intelligence and machine learning, GuardDuty is continuously evolving to help you protect your AWS environment."

Two of the new detections add the ability to alert when access requests are identified as coming from penetration-testing focused operating systems, namely Parrot Linux and Pentoo Linux (Kali Linux has been identified for a while), as these are unlikely to be legitimate traffic. The third new feature is a policy violation detection that alerts you to any request in which AWS account root credentials are used. This one makes monitoring of your root account a tick box on the audit checklist, as nobody should EVER be using their root account to perform day-to-day tasks (and if you are, please call us and we'll help you fix it), so any requests originating from the root account should be treated as suspicious.
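
If you haven't turned GuardDuty on yet, a quick sketch of enabling it and pulling findings from the CLI looks something like this (the detector ID below is a placeholder for the value returned by the first call):

# Enable GuardDuty in the current region and note the detector ID it returns
aws guardduty create-detector --enable

# List findings for that detector (root credential usage, pentest OS detections, etc.)
aws guardduty list-findings --detector-id 12abc34d567e8f901234gh5678i901j2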

And that's it for the AWS update for Friday the 15th of February 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and for the continuation of my upcoming blogs on new AWS products and features.

AWS Site-to-Site VPN and Transit Gateway

I recently implemented an AWS site-to-site VPN for a customer to connect their on-premise network to their newly deployed AWS account.

The requirement was network level connectivity from their on-premise network to their management VPC. Support of production VPC resources would be carried out from bastion hosts in the management VPC.

The setup of this was simple from an AWS perspective. With CloudFormation we deployed a Customer Gateway (CGW) using the IP address of their on-premise firewall, created a Virtual Private Gateway (VGW) and then created the VPN connection itself.


The on-premise configuration took a bit of research to get right but once configured correctly it worked as expected.

However, after deployment it was determined that an on-premise server needed to connect to a Production VPC resource.

We already have a VPC peer connection between the management and production VPCs, but a VPN will only route traffic to the VPC it is connected to, and VPC peer connections are not 'transitive'.


For a more detailed explanation of this, see: https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html

The solutions considered to allow access from on-premise to the production VPC resource were:

  • Create another VPN connection from the on-premise datacenter to the production VPC
  • Deploy an application proxy in the management VPC
  • Deploy the newly announced AWS Transit Gateway service

The customer wasn't keen on adding VPN connections, as they would add configuration and complexity to the on-premise firewall, and we weren't confident that an application proxy would work, so we decided on the new Transit Gateway service.

AWS Transit Gateway

AWS Transit Gateway was released at the end of 2018. It will allow our customer to connect their on-premise network to both of their AWS VPCs, and any future VPCs, without having to configure and support multiple VPN endpoints on their on-premise firewall and support multiple VPN gateways in AWS.


The steps to implement this were fairly simple; however, CloudFormation doesn't cover all of them:

Deploy the ‘Transit Gateway’ and ‘Transit Gateway Attachment’ for the VPCs

This CloudFormation template assumes that two VPCs already exist and that each has one subnet.
The VPC and subnet IDs need to be entered into the parameters section. The on-premise VPN endpoint is set up after the AWS VPN setup, but its IP address is added to the VPNAddress parameter. The template also assumes a non-BGP on-premise endpoint.

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Transit Gateway",
    "Parameters": {
        "VPCIdMgmt": {
            "Type": "String",
            "Description": "Management VPC Id",
            "Default": "vpc-12345678901234567"
        },
        "VPCIdProd": {
            "Type": "String",
            "Description": "Production VPC Id",
            "Default": "vpc-9876543210987654"
        },
        "MgmtPrivateAzASubnetId": {
            "Type": "String",
            "Description": "Az A Subnet in Mgmt VPC",
            "Default": "subnet-23456789012345678"
        },
        "ProdPrivateAzASubnetId": {
            "Type": "String",
            "Description": "Az A Subnet in Prod VPC",
            "Default": "subnet-34567890123456789"
        },
        "VPNAddress": {
            "Type": "String",
            "Description": "On-premise VPN endpoint",
            "Default": "201.65.1.1"
        }
    },
    "Resources": {
        "CustomerGateway": {
            "Type": "AWS::EC2::CustomerGateway",
            "Properties": {
                "Type": "ipsec.1",
                "BgpAsn": "65000",
                "IpAddress": {
                    "Ref": "VPNAddress"
                }
            }
        },
        "TransitGateway": {
            "Type": "AWS::EC2::TransitGateway",
            "Properties": {
                "AmazonSideAsn": 65001,
                "DefaultRouteTableAssociation": "enable",
                "DefaultRouteTablePropagation": "enable",
                "Description": "Transit Gateway",
                "DnsSupport": "enable",
                "VpnEcmpSupport": "enable"
            }
        },
        "TransitGatewayMgmtAttachment": {
            "Type": "AWS::EC2::TransitGatewayAttachment",
            "Properties": {
                "SubnetIds": [{
                    "Ref": "MgmtPrivateAzASubnetId"
                }],
                "TransitGatewayId": {
                    "Ref": "TransitGateway"
                },
                "VpcId": {
                    "Ref": "VPCIdMgmt"
                }
            }
        },
        "TransitGatewayProdAttachment": {
            "Type": "AWS::EC2::TransitGatewayAttachment",
            "Properties": {
                "SubnetIds": [{
                    "Ref": "ProdPrivateAzASubnetId"
                }],
                "TransitGatewayId": {
                    "Ref": "TransitGateway"
                },
                "VpcId": {
                    "Ref": "VPCIdProd"
                }
            }
        }
    },
    "Outputs": {
        "CustomerGateway": {
            "Description": "CustomerGateway Id",
            "Value": {
                "Ref": "CustomerGateway"
            },
            "Export": {
                "Name": "TransitGateway-CustomerGatewayId"
            }
        },
        "TransitGateway": {
            "Description": "TransitGateway Id",
            "Value": {
                "Ref": "TransitGateway"
            },
            "Export": {
                "Name": "TransitGateway-TransitGatewayId"
            }
        }
    }
}

After the CloudFormation stack is deployed, the 'Outputs' section will list the CustomerGatewayId and TransitGatewayId; these are needed in the next steps.
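
If you'd rather grab those values from the CLI than the console, something like the following should do it (the stack name is whatever you called the stack when deploying the template above):

# Print the Outputs (CustomerGatewayId and TransitGatewayId) of the deployed stack
aws cloudformation describe-stacks --stack-name transit-gateway \
  --query "Stacks[0].Outputs"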

Create the site-to-site VPN

This step is completed in the AWS CLI as CloudFormation doesn't support it yet. Change customer-gateway-id and transit-gateway-id to the values in the Outputs section of the CloudFormation stack, or look them up in the AWS console.

aws ec2 create-vpn-connection --customer-gateway-id cgw-045678901234567890 \
  --transit-gateway-id tgw-56789012345678901 --type ipsec.1 --options "{\"StaticRoutesOnly\":true}"

Create the VPN Transit Gateway route

The attached VPCs have had routes added by default, but as we are using a non-BGP on-premise endpoint, the VPN needs routes specifically added.

The route we are adding here is the CIDR of the on-premise network e.g. 172.31.0.0/16

Get the ID of the 'Transit Gateway Route Table' and the VPN's 'Transit Gateway Attachment ID' from the AWS console under 'Transit Gateway Route Tables' and 'Transit Gateway Attachments'.
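
These IDs can also be looked up from the CLI rather than the console; a quick sketch:

# List Transit Gateway route tables (note the tgw-rtb-... ID)
aws ec2 describe-transit-gateway-route-tables

# List Transit Gateway attachments - the VPN attachment is the one with ResourceType "vpn"
aws ec2 describe-transit-gateway-attachments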

aws ec2 create-transit-gateway-route --destination-cidr-block 172.31.0.0/16 \
  --transit-gateway-route-table-id tgw-rtb-67890123456789012 --transit-gateway-attachment-id tgw-attach-7890123456789012

Configure VPC subnet routing

The routing that you see configured on the Transit Gateway is only used within the Transit Gateway itself, so we now need to manually add routes to the route tables of any VPC subnets that need to use the VPN.

In our case we are leaving VPC-VPC traffic to use the VPC peer, and only adding an on-premise network to the subnet routes.

Get the Transit Gateway ID from the CloudFormation template output, and get the route table ID of the VPC subnet:

aws ec2 create-route --route-table-id rtb-89012345678901234 --destination-cidr-block 172.31.0.0/16 --transit-gateway-id tgw-56789012345678901

Summary

Using a Transit Gateway can make site-to-site VPNs simpler and less messy by allowing a single VPN connection to AWS that can reach more than one VPC.

One important limitation is that Transit Gateway doesn’t yet support security groups.

If you use security group references over VPC peer connections and switch from VPC peering to Transit Gateway, you will see those security groups become listed as 'stale'; you would need to re-add them as IP-based rules.
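
Re-adding a rule as an IP-based (CIDR) rule is straightforward; a hedged example, with the security group ID, port and peer VPC CIDR as placeholders:

# Replace a stale security-group reference with a CIDR-based rule
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 10.1.0.0/16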

The Transit Gateway FAQ states the following:

  • Q: Which Amazon VPC features are not supported in the first release?
  • A: Security Group Referencing on Amazon VPC is not supported at launch. Spoke Amazon VPCs cannot reference security groups in other spokes connected to the same AWS Transit Gateway.

This implies that it will be supported in the future, but for now resource access between VPCs using security groups must remain over VPC Peer connections.

More information on Transit Gateway
https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html

Weekly AWS update: Friday 8th February 2019

DEVELOPERS, DEVELOPERS, DEVELOPERS… oh wait, wrong cloud. Regardless of who said those words, this week has been a busy one for our friends over at Amazon Web Services with a host of new products and features that are sure to delight the developers among us. This article continues the weekly series we are doing this year to help customers with a brief overview of the happenings within the AWS world over the last week. This is to try and help surface some of the more important announcements. This is not meant to be an exhaustive list of all the updates and changes to the AWS eco-system. It’s simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. As always, if you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • AWS X-Ray SDK for .NET Core is Now Generally Available
  • Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching
  • Develop and Test AWS Step Functions Workflows Locally
  • Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs
  • Amazon Corretto is Now Generally Available

AWS X-Ray SDK for .NET Core is Now Generally Available

For those who aren't aware or who haven't had the chance to leverage it yet, AWS X-Ray is an analysis and debugging tool that helps developers analyse and debug production, distributed applications, such as those built using a microservices architecture. At its core, it helps developers and support staff maintain visibility of individual requests as they traverse multiple interconnected microservices, resulting in quicker analysis and faster issue resolution. This week's release brings X-Ray integration to .NET Core functions and services. As stated in the announcement article (available here), "You can use AWS X-Ray to view a map of your application and its services in development and in production. Your applications can range from simple three-tier applications to complex microservices consisting of thousands of services, such as those built using AWS Lambda."

You can get the AWS X-Ray SDK for .NET Core from the X-Ray GitHub repository.
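
Getting started is largely a package install; the NuGet package names below are from memory, so treat this as a sketch and double-check them against the GitHub repository:

# Core X-Ray recorder for .NET / .NET Core (package names are assumptions - verify on the repo)
dotnet add package AWSXRayRecorder

# ASP.NET Core middleware for tracing incoming requests
dotnet add package AWSXRayRecorder.Handlers.AspNetCore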

Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching

Next on the list is the addition of support for multiple string values in blacklist matching within Amazon Simple Notification Service (SNS). Amazon SNS message filtering allows you to perform message filtering across your pub/sub solution without having to handle the logic within your application's infrastructure, reducing operational complexity and cost. To date, you've been able to match messages based on string whitelisting and blacklisting as well as string prefix and numerical matching. This new addition adds the ability to use multiple string values within your blacklisting operators, further increasing the flexibility of the SNS message filtering service. It should allow more customers to transition their existing EC2 or Lambda hosted filtering logic into SNS, further reducing their operational footprint. Further information on the new feature release can be found here and detailed instructions on getting started with message filtering are available within the SNS developer guide.
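
To give a feel for the feature, here's a rough sketch of applying a blacklist ("anything-but") filter policy with multiple string values to an existing subscription; the subscription ARN and the event_type attribute are placeholders for your own values:

# Deliver every message EXCEPT those whose event_type is order_cancelled or order_refunded
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:ap-southeast-2:123456789012:my-topic:11111111-2222-3333-4444-555555555555 \
  --attribute-name FilterPolicy \
  --attribute-value '{"event_type": [{"anything-but": ["order_cancelled", "order_refunded"]}]}'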

Develop and Test AWS Step Functions Workflows Locally

One of the issues with migrating logic and application features into AWS native services is that you are then required to conduct all of your development and testing activities while connected to the cloud. Well, with the announcement made earlier this week, AWS Step Functions can now be developed and tested on your local development machine through the use of the new AWS Step Functions Local. Quoting from the official feature announcement (available here), "AWS Step Functions Local is a downloadable version of Step Functions that lets you develop and test applications using a version of Step Functions running in your own development environment. Using the service locally rather than over the Internet can be faster in some situations, save on Step Functions state transitions, and allow you to easily enforce sandbox restrictions." This means that I can finally work on my step functions when I'm flying around the country. It's available now for anybody to get started with, in both JAR and Docker versions ready for download. I hope to have an article out shortly with my first impressions of the tool set.
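
For the Docker flavour, getting a local endpoint up looks roughly like this (the image name and default port are as I recall them, so treat this as a sketch):

# Run Step Functions Local, which listens on port 8083 by default
docker run -p 8083:8083 amazon/aws-stepfunctions-local

# Point the normal AWS CLI at the local endpoint
aws stepfunctions list-state-machines --endpoint-url http://localhost:8083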

Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs

But that's not all for development of cloud resources on a local machine… not at all. AWS have also announced an update to DynamoDB Local with the addition of several new features including:

  • Transactional APIs
  • On-Demand Capacity Mode
  • As many as 20 Global Secondary Indexes per table.

DynamoDB Local has been around for a while now and has been a part of my development toolkit for almost as long. With the addition of these new features, I can now test and validate additional aspects of my DynamoDB tables without having to interrupt my workflow and test it in a cloud environment. This includes simulating On-Demand behaviours and GSI focused operations. You can be sure I’m going to be giving this a good run through in the coming weeks and will post an update with my experience. For those who want to get started for themselves, the link is available on the Setting up DynamoDB Local page.
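
If you haven't used DynamoDB Local before, the Docker version is the quickest way in; a minimal sketch:

# Run DynamoDB Local on port 8000
docker run -p 8000:8000 amazon/dynamodb-local

# Use the regular AWS CLI against the local endpoint instead of the cloud
aws dynamodb list-tables --endpoint-url http://localhost:8000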

Amazon Corretto is Now Generally Available

And finally for this week's roundup is yet another developer update, with the announcement that Amazon Corretto has reached General Availability after being in preview since its original announcement back in November of last year. In case you didn't hear the original announcement, Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). As stated in the official announcement (available here), there are far too many updates to be listed in a single article; suffice it to say that it's been updated to OpenJDK version 8u202 and that a more comprehensive list of platforms is now supported, including Amazon Linux 2 and an official Docker image. AWS also note that they are currently working on Corretto 11, corresponding to OpenJDK 11, and will release it in time for testing before April of this year.

And that's it for the AWS update for Friday the 8th of February 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and for the continuation of my upcoming blogs. Further articles will include my experiences testing out the new updates to DynamoDB Local and Step Functions Local.

Weekly AWS update: Friday 1st February 2019

And here we are, in February of 2019 already… 1/12 of the year has already been and gone. This week it’s been a little quiet in the world of Amazon Web Services, but there’s still been several announcements and releases this week that will help those building and developing in the World of AWS. This article continues the weekly series we are doing this year to help customers with a brief overview of the happenings within the AWS world over the last week to try and help surface some of the more important announcements. As always, this is not meant to be an exhaustive list of all the updates and changes to the AWS eco-system, but simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. If you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • Amazon ECS and Amazon ECR now have support for AWS PrivateLink
  • Amazon RDS for PostgreSQL Now Supports T3 Instance Types
  • AWS CodeBuild Now Supports Accessing Images from Private Docker Registry

AWS PrivateLink support in ECS and ECR

Kicking things off this week is the announcement last Friday that Amazon Elastic Container Service (ECS) and Amazon Elastic Container Registry (ECR) now have support for AWS PrivateLink. For those who are not aware, "AWS PrivateLink is a networking technology designed to enable access to AWS services in a highly available and scalable manner, while keeping all the network traffic within the AWS network. When you create AWS PrivateLink endpoints for ECR and ECS, these service endpoints appear as elastic network interfaces with a private IP address in your VPC."

Prior to this announcement, if you had EC2 instances that required access to an ECR repository or the ECS control plane, they needed to communicate with them across the public internet. While this isn't a problem for most people, it does mean that some traffic previously had to leave your trusted network to gain access to some of your AWS services. With the addition of AWS PrivateLink support, your resources can now access these services via private endpoints, resulting in a simplified networking solution. This capability will be particularly helpful for those organisations running outbound whitelisting on their internet connectivity. The new feature is available now in all ECS and ECR regions. It's important to note that AWS PrivateLink support for AWS Fargate is coming soon. If you'd like to know more about how to set up AWS PrivateLink with your ECS and/or ECR resources, you can visit the AWS Blog article available here.
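
Creating the interface endpoints is a one-liner per service; a hedged sketch for the ECR Docker endpoint in Sydney (the VPC, subnet and security group IDs are placeholders, and ECS/ECR actually use a few service names - ecr.api, ecr.dkr and ecs - so check the documentation for the full set):

# Interface (PrivateLink) endpoint for pulling images from ECR within the VPC
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.ap-southeast-2.ecr.dkr \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled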

Amazon RDS for PostgreSQL Now Supports T3 Instance Types

Next cab off the rank for this week is another announcement from last Friday, when AWS announced that Amazon RDS for PostgreSQL now supports running on T3 instances. If you're currently running PostgreSQL version 9.6.9 (or higher) or 10.4 (or higher) you can transition to the new instance type via the AWS Management Console. For those who might not be aware, the T3 instances were released last year and are the next generation of the burstable instance types. If you haven't already taken a look at or trialled the T3 series, I recommend that you do, as we have typically seen savings when compared to the T2 series.

The addition of T3 support to Amazon RDS for PostgreSQL is available now in all regions other than AWS GovCloud (US), Mumbai and Osaka. It's also important to note that (quoting from the AWS RDS pricing page) "Amazon RDS T3 DB instances run in Unlimited mode, which means that you will be charged if your average CPU utilisation over a rolling 24-hour period exceeds the baseline of the instance. CPU Credits are charged at $0.075 per vCPU-Hour. The CPU Credit pricing is the same for all T3 instance sizes across all regions and is not covered by Reserved Instances."
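
The move itself is just an instance class change; a rough sketch via the CLI (the instance identifier and class are placeholders, and --apply-immediately triggers the change outside the maintenance window):

# Switch an existing RDS PostgreSQL instance to a burstable T3 class
aws rds modify-db-instance --db-instance-identifier my-postgres-db \
  --db-instance-class db.t3.medium --apply-immediately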

AWS CodeBuild Now Supports Accessing Images from Private Docker Registry

Previously when using AWS CodeBuild, you were only able to access Docker images from public Docker Hub repositories or those stored in Amazon Elastic Container Registry (ECR). With this announcement, you can now leverage any private Docker registry, either within your Virtual Private Cloud (VPC) or on the public internet (note that if you want to access a registry within a VPC, you will need to configure the VPC settings within your CodeBuild project).

This functionality leverages AWS Secrets Manager, where you store the credentials required for accessing your private registry. It is available in all CodeBuild regions today. For instructions on how to get your CodeBuild project working with your private registry, visit the AWS CodeBuild documentation available here.
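
Storing the registry credentials is the first step; a hedged sketch (the secret name is arbitrary, and as I understand it CodeBuild expects username and password keys in the secret, so verify against the documentation):

# Store private registry credentials for CodeBuild to reference (values are placeholders)
aws secretsmanager create-secret --name dockerhub/my-registry-credentials \
  --secret-string '{"username": "my-registry-user", "password": "example-password"}'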

And that's it for the AWS update for Friday the 1st of February 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and for the upcoming article on getting started with AWS WorkLink.


A tale of two products (don’t expect Dickens)

At Re:Invent and just after, AWS released several new products. Included in those were Amazon FSx for Windows File Server (which I'll refer to as AWS FSx Windows) and AWS Backup. Both of these products held a lot of interest for me, for various reasons, so I thought I'd give them a try. None of my testing was under work conditions, but the following are my experiences. Note: both are currently only available in a small number of regions.

AWS FSx Windows

Pros:

  • Easy setup (by itself)
  • Fully compatible Windows file server
  • DFS support
  • Has backups
  • Works as expected

Cons:

  • Requires AWS Microsoft AD in each VPC
  • Can’t change file share size
  • Some features can only be changed from CLI
  • Throughput can only be changed through restore
  • Minimum share size is 300GB

First out of the box, and released at Re:Invent, is AWS FSx Windows. AWS Elastic File System has been around for a while now and works nicely for providing a managed NFS share. Great for Linux, not so good for Windows. Your Windows sharing options are now enhanced with AWS FSx Windows, an AWS managed Windows file server running on Windows Server Datacenter. When you go to do the setup, there are only a few options, so it should be pretty easy, right? Well, yes and no. Actually configuring FSx Windows is easy, but before you do that, you need to have an AWS Microsoft AD directory service (not just an EC2 instance running AD) in the VPC in which you are launching FSx Windows. If you're running Windows based workloads, you'll likely have a Windows admin, so this shouldn't be too hard to configure and tie into your regular AD. OK, so I've got my AWS Microsoft AD, what's next? Well, jump into the FSx Windows console, enter the size, throughput (default 8MB/s), backup retention and window, and maintenance window. That's it. You now have a Windows file share you can use for all your Windows file sharing goodness.

FSx Windows FS Creation

But what's the catch? It can't be that easy? Well, mostly it is, but there are some gotchas. With EFS, you don't specify a size. You just use space and Amazon bills you for what you use. If you throw a ton of data on the share, all good. It just grows. With FSx Windows, you have to specify a size, minimum 300GB, at creation. Right now, there is no option to grow the share. If you run out of space, you'll need to look at DFS (fully supported): create a new FSx Windows share and use your own DFS server to manage merging the two shares into a single namespace. While on the topic of DFS, FSx Windows is single AZ only. If you want redundancy, you'll need to create a second share and keep the data in sync. AWS has a document on using DFS for a multi-AZ scenario: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/multi-az-deployments.html

What other issues are there? Well, a semi-major one and a minor one. The semi-major one is that there is currently no way to easily change the throughput, or at least not that I've found. When doing a restore, you can choose the throughput, so you can restore with a new throughput, blow away the old share and then map to the new one. Not terrible, but a bit painful. The minor one is that backup changes (time and retention) and scheduled maintenance window changes can only be done via the CLI. It would be nice to have those options in the console, but it's not really a huge deal.
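
For reference, the CLI call for those backup and maintenance window changes looks roughly like this (the file system ID and values are placeholders, and the parameter names are as I recall them from the UpdateFileSystem API, so verify before relying on them):

# Adjust backup retention, backup window and weekly maintenance window on an existing share
aws fsx update-file-system --file-system-id fs-0123456789abcdef0 \
  --windows-configuration AutomaticBackupRetentionDays=14,DailyAutomaticBackupStartTime=01:00,WeeklyMaintenanceStartTime=7:03:00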

And there you have it. An AWS managed Windows file server. Having to rely on an AWS Microsoft AD in each VPC is a bit of a pain, but not terrible. Overall, I think this is a great option. Once it rolls out to your region it's definitely worth investigating if you currently have EC2 instances running as Windows file servers.

AWS Backup

Pros:

  • Easy EFS backup
  • RDS & Dynamo DB backup

Cons:

  • No EC2 backup (EBS isn’t the same)
  • Dashboard is fairly basic

Last year AWS launched their first foray into managed backups with Data Lifecycle Manager (DLM). This was very basic and helped with the scheduling and lifecycle management of EBS snapshots. The recent announcement of AWS Backup sounded like it would be a big step forward in AWS's backup offering. In some ways it is, but in others it is still very lacking. I'll start with the good, because while these features are great, they are relatively quick to cover. RDS and DynamoDB backup expands on what is already offered, extending retention past the traditional 35-day mark. The big surprise, and a much needed feature, was support for EFS backups. Previously, you had to roll your own or use the EFS-to-EFS backup solution provided as an AWS Answer. It worked, but was messy. This option makes it really easy to back up an EFS volume. Just configure it into a Backup Plan and let AWS do the work! There's not a lot to say about this, but it's big, and may be the main reason people include AWS Backup in their data protection strategy.
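
Alongside scheduled Backup Plans you can also kick off an on-demand job; a hedged sketch for an EFS file system (all ARNs and the vault name below are placeholders):

# Run an on-demand backup of an EFS file system into the default vault
aws backup start-backup-job --backup-vault-name Default \
  --resource-arn arn:aws:elasticfilesystem:ap-southeast-2:123456789012:file-system/fs-0123456789abcdef0 \
  --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole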

AWS Backup - Dashboard01

Now for the bad: there is no EC2 backup or restore. But wait! What about the EBS backup and restore option? That's OK, but really, how many people spin up EBS volumes and only want to protect the volume? You don't. You spin up an EC2 instance (with attached volumes) and want to protect the instance. Sadly, this is where AWS Backup falls down and products like Cloud Protection Manager from N2W Software shine. When you have an issue that requires restoring an EC2 instance from backup, it's generally not a calm situation with plenty of time to stop and think. There's a disaster of some form and you need that server back now! You don't want to have to restore each EBS volume, launch an instance from the root volume, make sure you know what volumes were attached in what order and with what device names, and have all the details of security groups, IPs, etc. You just want to hit that one button that says "restore my instance" and be done. That's what tools like CPM give you, and it's what is sorely lacking in AWS Backup.

Summary

What's my wrap-up from my "Tale of two products"? For FSx Windows, if you have a need for a Windows file server, wait until it comes to your region and go for it. AWS Backup has its place, especially if you have important data in EFS, but it's no replacement for a proper backup solution. Most likely it will be implemented in a hybrid arrangement alongside another backup product.

Note: AWS Backup also manages backups of AWS Storage Gateway. I didn’t have one to test, so I won’t comment.

Weekly AWS update: Friday 25th January 2019

Well, it’s Australia Day weekend once again and our friends over at Amazon Web Services have been keeping themselves very busy this last week with several key announcements and releases that have a special place in the heart of us Australians. This article continues the weekly series we are doing this year to help customers with a brief overview of the happenings within the AWS world over the last week to try and help surface some of the more important announcements. This is not meant to be an exhaustive list of all the updates and changes to the AWS eco-system, but simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. As always, if you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • Amazon WorkLink – Secure, One-Click Mobile Access to Internal Websites and Applications
  • TLS termination on Network Load Balancers
  • PROTECTED Status for Australia
  • Update to AWS Trusted Advisor

Product Announcements

As always, first cab off the rank for the week are the product announcements, and this week we have the release of Amazon WorkLink. The AWS product page states that "With Amazon WorkLink, employees can access internal web content as easily as they access any public website, without the hassle of connecting to their corporate network. When a user accesses an internal website, the page is first rendered in a browser running in a secure container in AWS. Amazon WorkLink then sends the contents of that page to employee phones as vector graphics while preserving the functionality and interactivity of the page. This approach is more secure than traditional solutions because internal content is never stored or cached by the browser on employee phones, and employee devices never connect directly to your corporate network."

This product is a potential game changer for several really common use cases. You could potentially even replace your whole VPN with this solution, not only reducing your operational footprint but also providing a more secure, easier to use solution for your users. Unfortunately, it's currently only available in AWS US East (N. Virginia), AWS US East (Ohio), AWS US West (Oregon), and AWS EU (Ireland), but will no doubt be coming to AWS AP Southeast (Sydney) in the future. Look out for our upcoming article where we take a closer look at how Amazon WorkLink actually operates and how you can go about setting it up. In the meantime, for those wanting more details you can visit the official product page here and, as always, there is a fantastic blog article written by Jeff Barr available on the AWS blog: https://aws.amazon.com/blogs/aws/amazon-worklink-secure-one-click-mobile-access-to-internal-websites-and-applications/

Product Updates

While we are on the topic of product announcements and changes, AWS announced on Thursday that you can now make use of TLS (Transport Layer Security) connections that terminate at a Network Load Balancer. Not only does this allow for simplified management and improved compliance, it also results in cleaner access logs (as your NLB logs can now contain TLS termination details) as well as source IP preservation, allowing you to pass the source IP address all the way through to your backend servers. For a detailed write up of the new feature as well as a step by step guide on getting started, see Jeff Barr's blog available at https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
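
Terminating TLS on an NLB is essentially a listener with protocol TLS and an ACM certificate; a sketch with placeholder ARNs and a commonly used security policy:

# Add a TLS listener to an existing Network Load Balancer
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:loadbalancer/net/my-nlb/0123456789abcdef \
  --protocol TLS --port 443 \
  --certificates CertificateArn=arn:aws:acm:ap-southeast-2:123456789012:certificate/11111111-2222-3333-4444-555555555555 \
  --ssl-policy ELBSecurityPolicy-2016-08 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/my-targets/0123456789abcdef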

Service Updates

The next announcement is a big one for Australia, with Amazon Web Services and the Australian Cyber Security Centre (ACSC) announcing on Wednesday that the ACSC has awarded AWS PROTECTED certification. This is currently the highest data security certification available in Australia for cloud providers on the Certified Cloud Services List (CCSL). What's really exciting about the announcement is that AWS have managed to get 42 services included within the certification (including, but not limited to, Lambda, Key Management Service and GuardDuty) and that there are no additional charges for using PROTECTED certified services. As always when it comes to certifications on AWS, visit your AWS Artifact page (available here) to get the specific details around the certification.

And finally for this week's roundup is an update that's going to make validating the health of your AWS environment a little bit easier: the announcement that AWS Trusted Advisor has expanded functionality with new best practice checks. For those who are not aware, "AWS Trusted Advisor is an application that draws upon best practices learned from AWS' aggregated operational history of serving millions of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, and closing security gaps." AWS recently announced that they have added a range of new checks to validate best practices within your AWS environment, covering DynamoDB, Route 53 and driver versions for Windows instances. For detailed information on the Trusted Advisor best practice checks, you can look them up here. As always, please feel free to reach out to us directly (via the "Contact Us" link at the top of the page) if you would like assistance in benchmarking or managing your AWS environment.
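
The checks can also be pulled programmatically through the AWS Support API, which from memory requires a Business or Enterprise support plan and is served out of us-east-1; a quick sketch:

# List all Trusted Advisor checks (requires a Business or Enterprise support plan)
aws support describe-trusted-advisor-checks --language en --region us-east-1

# Fetch the result of a specific check (the check ID is a placeholder from the previous call)
aws support describe-trusted-advisor-check-result --check-id abcDEF123 --region us-east-1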

And that's it for the AWS update for Friday the 25th of January 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and for the continuation of our blog series on developing and deploying serverless SPA environments on AWS using Static Site Generators.


Weekly AWS update: Friday 18th January 2019

Another week into 2019 and we have more activities happening in the world of Amazon Web Services. This article continues the weekly series we are doing this year to help customers with a brief overview of the happenings within the AWS world over the last week to try and help surface some of the more important announcements. This is not meant to be an exhaustive list of all the updates and changes to the AWS eco-system, but simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. As always, if you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • Introduction of AWS Backup
  • AWS CodePipeline Now Supports Deploying to Amazon S3
  • Addition of support for Appium Node.js and Appium Ruby for AWS Device Farm

First cab off the rank for this week is another product announcement with the release of AWS Backup. The AWS product page states that "AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on premises. With AWS Backup, protecting your AWS resources, such as Amazon EFS file systems, is as easy as a few clicks in the AWS Backup console. Customers can configure and audit the AWS resources they want to back up, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity."

This is potentially an exciting product as it will allow customers to use an AWS-native service rather than relying on third-party solutions. Unfortunately, it's not available in Sydney yet (currently only the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions). More details are available here and, as always, there is a fantastic blog article written by Jeff Barr available on the AWS blog: https://aws.amazon.com/blogs/aws/aws-backup-automate-and-centrally-manage-your-backups/

While on the topic of product announcements, in addition to the availability of AWS Backup there has also been a series of announcements around other AWS services gaining support for the new product. So far, we have seen announcements around support for:

  • Amazon Elastic File System (EFS) – Link here
  • AWS Storage Gateway – Link here
  • Amazon DynamoDB – Link here
  • Amazon Elastic Block Store (EBS) – Link here

A little closer to my personal needs, AWS announced this morning that AWS CodePipeline now supports deploying to Amazon S3. "AWS CodePipeline is a fully managed continuous delivery (CD) service that lets you automate your software release process for fast and reliable updates. You can now use CodePipeline to deploy files, such as static website content or artifacts from your build process, to Amazon S3." This feature opens up a host of possibilities, including some changes to our current blog series on hosting static websites using S3 and CloudFront. (Watch out for the next instalment coming out on Tuesday.)

And finally for this week's roundup is an update that's sure to make a lot of developers happy, with the announcement that AWS Device Farm now supports Appium for Node.js and Ruby. For those who are not aware, "AWS Device Farm is an app testing service that lets you run automated tests and interact with your Android, iOS, and web apps on real devices. Device Farm supports running automated tests written in most of the popular test frameworks including Espresso, XCTest, Appium Python and Appium Java. Starting today, you can use Device Farm to execute your tests written in Appium Node.js and Appium Ruby against real devices. You can customize any step in the test process using these frameworks through a simple configuration file." For more information on getting started with AWS Device Farm, you can follow the getting started guide available here.

And that's it for the AWS update for Friday the 18th of January 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and for the continuation of our blog series on developing and deploying serverless SPA environments on AWS using Static Site Generators.


Deploying and managing a Static Website using Gatsby, S3 and GitLab: Part 1

Running a website has always been a pain for organisations. From renting servers, installing and managing software, security patches and version upgrades… not to mention the 24×7 support team needed to monitor it and fix it when it breaks. All this required effort sets the bar quite high for launching a new website, even when all you want to do is run a simple landing site or launch your own blog. Well, through the wonders of the AWS cloud and a few simple services… it doesn’t need to be that hard.

This article forms the first in a series which will look at how we can leverage AWS to host a fully functional website as well as implement a simple, easy to manage pipeline using the AWS Development Tools suite of products to facilitate updates and the publishing of new content. Over a series of four blog articles, we will look at:

  • How we can build and maintain a website using Static Website Generators such as Hugo.io or Gatsby
  • The process for setting up an AWS account that will host our website and the associated AWS services.
  • How we can build out a simple CI/CD pipeline using AWS CodeCommit, CodePipeline and other AWS tools to host the source code and perform updates for us triggered from a simple code check-in.

By the end of this series we will have a solution that looks like this:

  • S3 Bucket for hosting our new website, generated by our Static Content Generator
  • CloudFront Distribution for caching content and speeding up load times
  • Code Pipeline for automated Deployment
  • A bucket containing all associated log files from the website, CDN and Deployment pipeline

Now that we can see where we are headed, we can take a look at the first part of the solution: what is a Static Site Generator, why do we want one and how do we get started?

What are Static Site Generators

Put simply, a static website is a collection of pages contained in basic HTML files, requiring no server-side activity. A static site generator (or SSG) is a compromise between a hand-coded static site and a full CMS: you generate an HTML-only website from raw data and templates, and the resulting build is transferred to your live server. So, what does this mean in real life?

Basically, any website written in HTML could be considered a static website… but that would make it hard for non-web developers to make changes to the website, such as adding new products or writing articles. By using a static website generator, a content creator can write an article in their preferred format (such as Markdown) and then run the generator, which in turn will create an HTML page that can be published. This enables anybody in the organisation to write or update content for the website without having to understand the HTML/CSS/JS that drives it.

And this is where the solution starts to get interesting: each page is turned into its own HTML document. This is quite different from most CMS products (such as WordPress), which require servers to interpret requests for pages/articles and then fetch the required data from a database. Instead, we simply have a collection of HTML documents, which means no requirement for a server or a database, and this is where the notion of a "static website" comes from… all the content on the pages is already defined and not dynamically generated as users browse the site.

You may have already started to realise one of the biggest benefits of statically generated sites. Because you don't require a server, CMS software or a database… there's a lot less of an attack surface, resulting in fewer security concerns. In addition, because the content of the page doesn't need to be generated at run time, the sites will typically load MUCH faster.

Getting Started with a Static Site Generator

For our example, we are going to use Gatsby as our Static Site Generator (available at https://www.gatsbyjs.org/) as it provides a wide range of plugins, features and fantastic documentation, which is always helpful for those starting out.

Because of the high quality of the documentation, I'm not going to go through the exact steps required to set up your environment with Gatsby; I'm simply going to point you to "Part Zero" of the Gatsby tutorial, available at https://www.gatsbyjs.org/tutorial/part-zero/. By the end of Part Zero, you should have a working site on your local machine ready for use in the following articles.
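
For a rough idea of what Part Zero walks you through, the basic commands look something like this (the site name is just an example, and the exact steps may have changed since writing):

# Install the Gatsby CLI, then scaffold a new site
npm install -g gatsby-cli
gatsby new my-first-site
cd my-first-site

# Local development server (usually http://localhost:8000)
gatsby develop

# Generate the production-ready static files into the public/ folder
gatsby build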

Where to from here?

In this article we have taken a quick look at what we are trying to achieve over the coming articles, what static sites are and why we might want to use a static site generator. We should also have Gatsby set up on our machine and have a website working (although not a very pretty one). In the following articles, we will set up our AWS environment with a code repository in which to host our source code, we'll set up S3 buckets, Route 53 and CloudFront to host our new website, and finally we'll set up a pipeline which will automate the process of deploying and updating our website for us.

  • Part 2: AWS Account and CodeCommit (scheduled release Thursday 17th January 2019)
  • Part 3: S3 Static Website Hosting and CloudFront Distribution (scheduled Monday 21st January 2019)
  • Part 4: CodePipeline and CodeBuild (scheduled release Wednesday 23rd January 2019)

Weekly AWS update: Friday 11th January 2019

Well, for a lot of people (myself included) we have now finished our first week back at work for 2019 and the teams over at Amazon Web Services are already hard at work releasing new products, services and even a couple of price reductions to help start off the 2019 year. This article forms the first of a weekly series we will be doing this year to help customers with a brief overview of the happenings within the AWS world over the last week to try and help surface some of the more important announcements. This is not meant to be an exhaustive list of all the updates and changes to the AWS eco-system, but simply a summary of changes that might have an impact on the business and trends we at Kloud are seeing within the industry. As always, if you would like to talk to somebody about how you might be able to leverage some of these new technologies and services, please feel free to reach out using the contact link at the top of the page.

The key takeaways from this week are:

  • Amazon DocumentDB (with MongoDB Compatibility)
  • Amazon Neptune is now available in Asia Pacific (Sydney)
  • AWS Step Functions now support resource tagging
  • AWS Fargate Price Reductions

First cab off the rank are the product announcements, and the talk of the town this week is the release of Amazon DocumentDB (with MongoDB compatibility). The AWS product page states that "Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads." More details are available here and, as always, there is a fantastic blog article written by Jeff Barr available on the AWS blog: https://aws.amazon.com/blogs/aws/new-amazon-documentdb-with-mongodb-compatibility-fast-scalable-and-highly-available/

While on the topic of product announcements, it was also announced that a number of new AWS services and products have recently gained compliance eligibility. While some of these have been around for a while, most of them are recently released or announced products and will be launching with this capability, enabling customers to leverage them in compliant environments from day one. The list of services and their associated compliance eligibility is outlined below:

  • Amazon DocumentDB (with MongoDB compatibility) [HIPAA, PCI, ISO, SOC 2]
  • Amazon FSx [HIPAA, PCI, ISO]
  • Amazon Route 53 Resolver [ISO]
  • AWS Amplify [HIPAA, ISO]
  • AWS DataSync [HIPAA, PCI, ISO]
  • AWS Elemental MediaConnect [HIPAA, PCI, ISO]
  • AWS Global Accelerator [PCI, ISO]
  • AWS License Manager [ISO]
  • AWS RoboMaker [HIPAA, PCI, ISO]
  • AWS Transfer for SFTP [HIPAA, PCI, ISO]

A little closer to home, on Wednesday it was announced that Amazon Neptune is now available in the Asia Pacific (Sydney) region. For those who don't remember, Amazon Neptune was announced at AWS Re:Invent 2017 and is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets; it supports the popular graph models Property Graph and W3C's RDF. You can start working with Amazon Neptune today and prices (for a db.r4.large instance) start at $0.42 USD per hour.

And finally in the realm of product announcements, AWS Step Functions now supports resource tagging. AWS Step Functions is a workflow automation service that lets you quickly connect and coordinate multiple AWS services and applications. By using tags with Step Functions, you can define and associate labels with your Step Functions state machines, making it easier to manage, search for, and filter resources. Further information can be found in the Step Functions developer guide here.
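
Tagging an existing state machine is a single CLI call; a minimal sketch with a placeholder ARN and tag values:

# Add an environment tag to a state machine, then read the tags back
aws stepfunctions tag-resource \
  --resource-arn arn:aws:states:ap-southeast-2:123456789012:stateMachine:MyStateMachine \
  --tags key=environment,value=production

aws stepfunctions list-tags-for-resource \
  --resource-arn arn:aws:states:ap-southeast-2:123456789012:stateMachine:MyStateMachine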

Next in line are price reductions, and the news in this space is super exciting, with AWS Fargate getting price reductions of up to 50%. Quoting from the AWS Compute Blog article written by Nathan Peck, "Effective January 7th, 2019 Fargate pricing per vCPU per second is being reduced by 20%, and pricing per GB of memory per second is being reduced by 65%. Depending on the ratio of CPU to memory that you're allocating for your containers, you could see an overall price reduction of anywhere from 35% to 50%." More information is available at https://aws.amazon.com/blogs/compute/aws-fargate-price-reduction-up-to-50/ and the table below shows how much you can expect to save on a variety of Fargate task sizes:

vCPU   GB Memory   Effective Price Cut
0.25   0.5         -35.00%
0.25   2           -50.00%
0.5    1           -35.00%
0.5    4           -50.00%
1      2           -35.00%
1      8           -50.00%
2      4           -35.00%
2      12          -47.00%
2      16          -50.00%
4      8           -35.00%
4      16          -42.50%
4      30          -49.30%
And that's it for the update for the first week back of 2019. Please keep an eye out for our weekly updates on the happenings within the AWS eco-system and also for our upcoming blog series on developing and deploying serverless SPA environments on AWS using Static Site Generators.


AWS DeepRacer – Tips and Tricks – Battery and SSH

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

I was going to do an unboxing video, but Andrew Knaebel has done a good enough job of that and posted it on YouTube, so I'll skip that part and move on to more detail on getting up and running with the AWS DeepRacer.

A lot of this is covered in the AWS DeepRacer Getting Started Guide so I’ll try and focus on the places where it was not so clear.

Before we get started, there are a few items you will need to follow along with this blog. They are:

  • AWS DeepRacer physical robot
  • USB-C Power adapter
  • PowerBank with USB-C connector
  • 7.4V 1100mAh RC car battery pack
  • Balance Charger for RC battery pack
  • If not in the US, a power socket adapter

Connecting and Charging

When I followed the instructions in the AWS getting started guide, I found they left out a few minor details that make your life easier going forward. Below is a way of avoiding pulling apart the whole car to charge it every time.

1. Install the USB-C PowerBank on top of the vehicle with its USB-C port toward the right-hand side, near the USB-C port on the vehicle

2. Install the RC battery by taking off the four pins and (GENTLY, as there are wires connected) moving the top compute unit to the side as shown below. Ensure you leave the charging cable and power sockets accessible, as you don't want to be unpinning the car every time

3. Connect the USB-C power adaptor to the USB-C port on the PowerBank and connect the Balance charger to the charging cable of the battery

4. Wait for the PowerBank to show four solid lights to signify it's charged, and for the charge light on the balance charger to turn off to let you know the RC battery is ready

Opening up SSH to allow for easier model loads

I'm sure that AWS are working hard to include AWS IoT Greengrass capabilities to allow users to push their latest model to the AWS DeepRacer, but for now it looks like that isn't an option.

Another nice feature would be the ability to upload the model.pb file via the AWS DeepRacer's local web server. Alas, we currently need to put files onto USB sticks.

There is another way for the moment, and that's to open up SSH on the AWS DeepRacer firewall and use SCP to copy the file into the correct location.

Firstly, you will need to log in to the Ubuntu server on the AWS DeepRacer. For instructions on how to achieve this please refer to my previous post, AWS DeepRacer – How to login to the Ubuntu Computer Onboard

1. Once logged in, open up a terminal

2. Type in: sudo ufw allow ssh
From now on you will be able to log in via SSH

3. On another machine, you should now be able to log in via SSH: ssh deepracer@<ip address of your DeepRacer>

4. Copy your model.pb file to your DeepRacer home directory via SCP (I used WinSCP)

5. Move the file into a folder inside /opt/aws/deepracer/artifacts/<folder name of your choice>/model.pb

6. You're done! Enjoy being able to load a model without a USB stick. A consolidated sketch of these steps from the command line is shown below
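
Putting steps 3 to 5 together from another machine, the whole thing looks roughly like this (the IP address and folder name are placeholders for your own):

# From your other machine: copy the model to the DeepRacer's home directory
scp model.pb deepracer@192.168.1.50:~/

# SSH in and move the model into the artifacts folder the console reads from
ssh deepracer@192.168.1.50
sudo mkdir -p /opt/aws/deepracer/artifacts/my-model
sudo mv ~/model.pb /opt/aws/deepracer/artifacts/my-model/model.pb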

There are other Tips and Tricks coming as I experience the AWS DeepRacer ecosystem.
