NDLIB CDK

Stack Tags

Creates an Aspect that will apply stack-level tags to all stacks in the application, based on our defined required tags. Values for these tags are read from the following expected context keys:

Key          Value
projectName  Name of the overall project that the stacks belong to
description  A description for the stacks
contact      Contact information for the person(s) deploying this stack
owner        Name or CLID of the person deploying this stack

Example usage:

import cdk = require('aws-cdk-lib')
import { StackTags } from '@ndlib/ndlib-cdk2'

const app = new cdk.App()
cdk.Aspects.of(app).add(new StackTags())
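
These context values are normally provided in cdk.json or with -c on the cdk command line; they can also be set in code. A minimal sketch with example values:

import cdk = require('aws-cdk-lib')

const app = new cdk.App({
  context: {
    projectName: 'myproject',
    description: 'Stacks for the myproject site',
    contact: 'me@myhost.com',
    owner: 'myclid',
  },
})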

HTTPS Application Load Balancer

Creates a common construction of an ALB that redirects all traffic from HTTP to HTTPS and, by default, responds with a 404 until additional listener rules are added. It can be used within a single stack that routes to multiple services in that stack, or it can be created in a parent stack where one or more child stacks then attach new services to the ALB.

Example usage:

import cdk = require('aws-cdk-lib')
import ec2 = require('aws-cdk-lib/aws-ec2')
import { HttpsAlb } from '@ndlib/ndlib-cdk2'
const stack = new cdk.Stack()
const vpc = new ec2.Vpc(stack, 'Vpc')
const alb = new HttpsAlb(stack, 'PublicLoadBalancer', {
  certificateArns: ['MyCertificateArn'],
  internetFacing: true,
  vpc,
})
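
Services are then attached by adding listener rules. A minimal sketch, assuming the construct exposes its HTTPS listener (the httpsListener property name here is hypothetical) and that a target group has been defined elsewhere:

import elbv2 = require('aws-cdk-lib/aws-elasticloadbalancingv2')

// Route requests for /myservice/* to an existing target group
new elbv2.ApplicationListenerRule(stack, 'MyServiceRule', {
  listener: alb.httpsListener, // hypothetical property; use however the construct exposes its HTTPS listener
  priority: 1,
  conditions: [elbv2.ListenerCondition.pathPatterns(['/myservice/*'])],
  action: elbv2.ListenerAction.forward([myTargetGroup]), // target group assumed to be defined elsewhere
})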

Archive S3 Bucket

Creates an S3 Bucket with no public access that immediately transitions all deposited objects to Glacier or Glacier Deep Archive. The public access policies can be overridden should it be necessary.

The following example will immediately move objects to Glacier:

import cdk = require('aws-cdk-lib')
import { ArchiveBucket } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const bucket = new ArchiveBucket(stack, 'Bucket')

The following example will immediately move objects to Glacier Deep Archive, while overriding the default public access behavior of the bucket:

import cdk = require('aws-cdk-lib')
import { BlockPublicAccess } from 'aws-cdk-lib/aws-s3'
import { ArchiveBucket } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const overrides = { blockPublicAccess: BlockPublicAccess.BLOCK_ACLS, deepArchive: true }
const bucket = new ArchiveBucket(stack, 'Bucket', { ...overrides })

CodePipeline Email Notifications

Adds a basic email notification construct to watch a CodePipeline for state changes. Note: Currently does not watch any of the actions for specific state changes.

Example message:

The pipeline my-test-pipeline-142PEPTENTABF has changed state to STARTED. To view the pipeline, go to https://us-east-1.console.aws.amazon.com/codepipeline/home?region=us-east-1#/view/my-test-pipeline-142PEPTENTABF.

Example usage:

import cdk = require('aws-cdk-lib')
import { Pipeline } from 'aws-cdk-lib/aws-codepipeline'
import { PipelineNotifications } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const pipeline = new Pipeline(stack, 'Pipeline', { /* ... */ })
const notifications = new PipelineNotifications(stack, 'TestPipelineNotifications', {
  pipeline,
  receivers: 'me@myhost.com',
})

CodePipeline Slack Status Notifications

Adds a slack notification construct to watch a CodePipeline for state changes. Note: Currently does not watch any of the actions for specific state changes.

Example message:

Pipeline example-pipeline-deployment-DeploymentPipelineEDAF206A-19MVGYZ0MIPPR just SUCCEEDED.

Example usage:

import cdk = require('aws-cdk-lib')
import { Pipeline } from 'aws-cdk-lib/aws-codepipeline'
import { SlackPipelineStatusNotifications } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const pipeline = new Pipeline(stack, 'Pipeline', { /* ... */ })

if (props.slackChannelId) {
  new SlackPipelineStatusNotifications(stack, 'SlackPipelineStatusNotifications', {
    pipeline,
    // messageText: 'example of additional message text', // optional
    slackChannelId: props.slackChannelId,
    slackChannelName: props.slackChannelName,
    // slackNotifyTopicArn: 'some valid topic arn', // optional, defaults to: Fn.importValue('slack-pipeline-status-notify:TopicArn')
  })
}

Service Level Objectives

Creates CloudWatch dashboards and alarms from a list of SLOs, based on the Google SRE workbook's approach to Alerting on SLOs.

import cdk = require('aws-cdk-lib')
import { SLOAlarms, SLOPerformanceDashboard } from '@ndlib/ndlib-cdk2'

const slos = [
  {
    type: 'CloudfrontAvailability',
    distributionId: 'E123456789ABC',
    title: 'My Website',
    sloThreshold: 0.999,
  },
  {
    type: 'CloudfrontLatency',
    distributionId: 'E123456789ABC',
    title: 'My Website',
    sloThreshold: 0.95,
    latencyThreshold: 200,
  },
]
const stack = new cdk.Stack()

// Create a dashboard that shows the 30 day performance of all of our SLOs
const perfDash = new SLOPerformanceDashboard(stack, 'PerformanceDashboard', {
  slos,
  dashboardName: 'PerformanceDashboard',
})

// Create the multi-window alarms for each of the SLOs. This will also create an SNS topic that will
// receive the alerts. The alarm will include links to the dashboards and runbooks given in its
// description.
const alarms = new SLOAlarms(stack, 'Alarms', {
  slos,
  dashboardLink: 'https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=My-Website',
  runbookLink: 'https://github.com/myorg/myrunbooks',
})
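
To actually receive the alerts, subscribe to the topic the alarms construct creates. A minimal sketch of an email subscription (the alarms.topic property name is hypothetical; check the construct for how it exposes its SNS topic):

import { EmailSubscription } from 'aws-cdk-lib/aws-sns-subscriptions'

// Deliver alerts to an email address (topic property name assumed)
alarms.topic.addSubscription(new EmailSubscription('me@myhost.com'))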

For more info, see the Service Level Objectives Readme.

Artifact S3 Bucket

Creates an S3 Bucket with no public access and requires secure transport to take any action. This is a common construct across many applications, where a build process requires a place to store its output.

The following example will create a standard artifact bucket:

import cdk = require('aws-cdk-lib')
import { ArtifactBucket } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const bucket = new ArtifactBucket(stack, 'Bucket')

Docker CodeBuild Action

This is a factory helper method to ease the creation of CodeBuild projects that use authenticated pulls from DockerHub. It requires following the official AWS documentation on solving the "error pulling image configuration: toomanyrequests" error, specifically the "Store your DockerHub credentials with AWS Secrets Manager" section.

The following example creates a Linux CodeBuild project that leverages the alpine:3 DockerHub image, using DockerHub authentication credentials stored in Secrets Manager (under the /test/credentials path) and the PipelineProject CDK construct:

import cdk = require('aws-cdk-lib')
import { PipelineProject } from 'aws-cdk-lib/aws-codebuild'
import { DockerCodeBuildAction } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const project = new PipelineProject(stack, 'test-project', {
  environment: {
    buildImage: DockerCodeBuildAction.fromLinuxDockerImage(stack, 'alpine-build-image', {
      image: 'alpine:3',
      credentialsContextKeyName: '/test/credentials',
    }),
  },
})

The following example creates a Windows CodeBuild project that leverages the mcr.microsoft.com/windows/servercore/iis DockerHub image, using DockerHub authentication credentials stored in Secrets Manager (under the /test/credentials path) and the Project CDK construct:

import cdk = require('aws-cdk-lib')
import { Project, BuildSpec } from 'aws-cdk-lib/aws-codebuild'
import { DockerCodeBuildAction } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const project = new Project(stack, 'test-project', {
  buildSpec: BuildSpec.fromObject({
    phases: {
      build: {
        commands: ['echo hello'],
      },
    },
    version: '0.2',
  }),
  environment: {
    buildImage: DockerCodeBuildAction.fromWindowsDockerImage(stack, 'iis-build-image', {
      image: 'mcr.microsoft.com/windows/servercore/iis',
      credentialsContextKeyName: '/test/credentials',
    }),
  },
})
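
The credentialsContextKeyName is resolved through CDK context. A minimal sketch of supplying it in code, assuming the context value names the Secrets Manager secret that holds the DockerHub credentials (the secret name below is hypothetical; this is normally set in cdk.json or with -c on the command line):

import cdk = require('aws-cdk-lib')

const app = new cdk.App({
  context: {
    '/test/credentials': 'dockerhub-credentials', // hypothetical secret name
  },
})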

Static Host

Creates a CloudFront distribution, S3 buckets, and other relevant resources for hosting a static site.

Lambda@Edge functions can also be connected to the CloudFront distribution, but must be defined first in your stack definition. See the Edge Lambdas section for reusable lambdas that may be particularly relevant for static hosts.

Example usage:

import cdk = require('aws-cdk-lib')
import { Certificate } from 'aws-cdk-lib/aws-certificatemanager'
import { StaticHost } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
const host = new StaticHost(stack, 'MyStaticHost', {
  hostnamePrefix: 'my-site',
  domainName: 'domain.org',
  websiteCertificate: Certificate.fromCertificateArn(stack, 'ACMCert', 'arn:aws:acm:::certificate/example'),
  indexFilename: 'index.shtml',
  edgeLambdas: [],
})

Edge Lambdas

These lambdas are standardized code which may be useful across multiple projects. They should be paired with one or more CloudFront distributions.

The current list of edge lambdas is:

  • SpaRedirectionLambda – Requesting a page other than the index redirects to the origin to serve up the root index file. This is useful for SPAs which handle their own routing.

These are especially useful to pair with a StaticHost construct. Alternatively, you can attach a custom lambda by implementing the IEdgeLambda interface. Each such construct creates the function, as well as a Behavior which can then be used in configuring a CloudFront distribution.

See Static Host for example.
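
A minimal sketch of pairing the SpaRedirectionLambda with a StaticHost (the constructor props passed here are assumptions; check the construct's interface):

import cdk = require('aws-cdk-lib')
import { Certificate } from 'aws-cdk-lib/aws-certificatemanager'
import { SpaRedirectionLambda, StaticHost } from '@ndlib/ndlib-cdk2'

const stack = new cdk.Stack()
// Define the edge lambda first, then hand it to the static host
const spaRedirect = new SpaRedirectionLambda(stack, 'SpaRedirect', {}) // props assumed
new StaticHost(stack, 'MySpaHost', {
  hostnamePrefix: 'my-spa',
  domainName: 'domain.org',
  websiteCertificate: Certificate.fromCertificateArn(stack, 'ACMCert', 'arn:aws:acm:::certificate/example'),
  edgeLambdas: [spaRedirect],
})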

Newman Runner

This construct creates a CodeBuild project and an action which can be used in a pipeline to run Newman tests. This is typically used for smoke tests to verify that the service deployed by the pipeline is operational.

Example usage:

import { Artifact, Pipeline } from 'aws-cdk-lib/aws-codepipeline'
import { Stack } from 'aws-cdk-lib'
import { NewmanRunner } from '@ndlib/ndlib-cdk2'

const stack = new Stack()
const pipeline = new Pipeline(stack, 'MyPipeline')
const appSourceArtifact = new Artifact('AppCode')
// ...
const newmanRunner = new NewmanRunner(stack, 'TestProject', {
  sourceArtifact: appSourceArtifact,
  collectionPath: 'test/newman/collection.json',
  collectionVariables: {
    hostname: 'https://www.example.com',
    foo: 'bar',
  },
})
pipeline.addStage({
  stageName: 'Build',
  actions: [buildAction, newmanRunner.action, approvalAction], // buildAction and approvalAction defined elsewhere
})

Pipeline S3 Sync

This construct creates a CodeBuild project which takes an input artifact and pushes the contents to an S3 bucket. This is useful in pipelines for static sites, as well as sites that need to be compiled earlier in the pipeline. The concept is similar to a BucketDeployment, but works around the issue of deploying large files since BucketDeployment relies on a Lambda.

Example:

import { Artifact, Pipeline } from 'aws-cdk-lib/aws-codepipeline'
import { Stack } from 'aws-cdk-lib'
import { PipelineS3Sync } from '@ndlib/ndlib-cdk2'

const stack = new Stack()
const pipeline = new Pipeline(stack, 'MyPipeline')
const appSourceArtifact = new Artifact('AppCode')
// ...
const s3Sync = new PipelineS3Sync(stack, 'S3SyncProd', {
  bucketNamePrefix: stack.stackName,
  bucketParamPath: '/all/stacks/targetStackName/site-bucket-name',
  cloudFrontParamPath: '/all/stacks/targetStackName/distribution-id',
  inputBuildArtifact: appSourceArtifact,
})
pipeline.addStage({
  stageName: 'Deploy',
  actions: [s3Sync.action],
})

PipelineS3Sync also handles assigning content types based on filename patterns. To do so, provide an array of patterns with the content type they should be assigned, like so:

new PipelineS3Sync(stack, 'S3SyncProd', {
  // ...
  contentTypePatterns: [
    {
      pattern: '*.pdf',
      contentType: 'application/pdf',
    },
    {
      pattern: '*.csv',
      contentType: 'text/plain',
    },
  ],
})

Source Watcher

The SourceWatcher construct creates the necessary resources to monitor a GitHub repository for changes. Based on the changes that are made, one or more pipelines may be invoked according to the configuration. This is similar to how CodePipelines can have a Webhook on the source action, except that it allows for conditional triggering depending on where the changes reside within the repo. Therefore, if multiple pipelines share a repo, changes to files which only impact one of them do not have to trigger the unmodified pipeline.

Example:

import { Stack } from 'aws-cdk-lib'
import { SourceWatcher } from '@ndlib/ndlib-cdk2'

const stack = new Stack()
new SourceWatcher(stack, 'TestProject', {
  triggers: [
    {
      triggerPatterns: ['my/test/**/pattern.js', 'example/*.*'],
      pipelineStackName: 'pipeline-a',
    },
    {
      triggerPatterns: ['src/anotherExample.ts'],
      pipelineStackName: 'pipeline-b',
    },
  ],
  targetRepo: 'ndlib/myRepo',
  targetBranch: 'main',
  gitTokenPath: '/all/github/ndlib-git',
  webhookResourceStackName: 'github-webhook-custom-resource-prod',
})

NOTE: webhookResourceStackName refers to a stack which contains the backend for a CustomResource webhook. Prior to using this construct, an instance of ndlib/aws-github-webhook should be deployed to the AWS account. One stack can be used for any number of SourceWatcher constructs.

EC2 server with access rules

The EC2withDatabase construct builds an EC2 server. The basic concept is to build a server that has security group access to one or more AWS RDS database servers. The server is built within an existing VPC. Parameters allow for AMI ID, instance type, root disk storage, networking, and security group rules for the server upon build. The server is created with the OS only; further configuration will need to be performed, often using Ansible.

Example usage:

import 'source-map-support/register';
import { App } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { EC2withDatabase } from '@ndlib/ndlib-cdk2';

const app = new App();

new EC2withDatabase(app, 'StackName', {
  env: { account: 'AccountID', region: 'us-east-1' },
  amiId: 'Valid AMI Id',
  availabilityZones: ['us-east-1c'],
  backup: 'True',
  instanceClass: ec2.InstanceClass.T3A,
  instanceSize: ec2.InstanceSize.MEDIUM,
  instanceName: 'InstanceName',
  keyName: 'libnd',
  privateIpAddress: 'IPAddress',
  publicSubnetIds: ['Valid subnet in the VPC that matches up with the availability zone'],
  domainName: 'Local domain name',
  CnameList: [], // Array of cnames/additional IPs to be added
  volumeSize: 100, // Size in GB (example value)
  vpcId: 'Valid VPC ID',
  SGDBAccessRules: [
    // Security groups that provide access to an RDS database; can be multiple or none at all
    {
      database: 'sg-088a1b9d4918effb3',
      port: 3306,
      description: 'MySQL access from jumpbox',
    },
  ],
  SGIngressRules: [
    // Additional access rules to allow connection to the server; can be multiple or none at all
    {
      ipv4: '10.32.0.0/11',
      port: 22,
      description: 'SSH access from campus',
    },
  ],
});

RDS Build - Postgres

Creates a Postgres Serverless v2 database cluster with one writer instance. Several pieces of information must be passed in to the build procedure. This first pass was created specifically for DEC and may change over time.

import { Tags } from 'aws-cdk-lib'
import { AuroraPostgresEngineVersion } from 'aws-cdk-lib/aws-rds'
import { PostgresRDSConstruct } from '@ndlib/ndlib-cdk2'

new PostgresRDSConstruct(this, `rds-peered-postgres-dec-${props.namespace}`, {
  availabilityZones: vpc.availabilityZones, // Must be passed in; must align with subnets
  dbClusterIdentifier: 'whatever-you-want-the-name-to-be', // Name of the database cluster
  dbFullVersion: AuroraPostgresEngineVersion.VER_16_4, // Can be any valid version of Postgres
  pgVersion: AuroraPostgresEngineVersion.VER_16_4, // Gives the option of a different parameter group
  rdsPostgresSecret: 'whatever-you-want-the-secret-name', // Secret with Postgres user password created
  privateSubnetIds: this.vpc.privateSubnets.map(subnet => subnet.subnetId), // Must be passed in
  SGIngressRules: [
    // Additional rules for database access; can be empty
    {
      ipv4: '10.58.250.75/32',
      port: 5432,
      description: 'Postgres access to conductor server',
    },
  ],
  useVpcId: this.vpc.vpcId,
  // env: props.env,
})
Tags.of(this.PostgresDatabase).add('Backup', 'True') // The RDS should be tagged for backup

Slack Integration

Allows pipelines to send a message to Slack to solicit pipeline approval. The message will include information such as the pipeline, the GitHub repos included in the pipeline, a link to the GitHub commits, a link to the test version that was deployed, and a link to the prod version that will be replaced. Finally, there will be buttons to Approve or Reject the pipeline.

Once the user clicks Approve or Reject in Slack, a POST message is sent to a Lambda which causes the pipeline to be Approved or Rejected.

Note that this replaces the ManualApprovalAction in a CodePipeline.

Example usage:

import { Fn } from 'aws-cdk-lib'
import { Topic } from 'aws-cdk-lib/aws-sns'
import { SlackIntegratedManualApproval } from '@ndlib/ndlib-cdk2'

// ...
// Create a Topic
const importedSlackNotifyTopicArn = Fn.importValue(props.slackNotifyTopicOutput)
const approvalTopic = Topic.fromTopicArn(this, 'SlackTopicFromArn', importedSlackNotifyTopicArn)

// Call the SlackIntegratedManualApproval
const approveTestStackAction = new SlackIntegratedManualApproval({
  actionName: 'ApproveTestStack',
  notificationTopic: approvalTopic,
  runOrder: 99,
  customData: {
    approveButtonText: 'Approve',
    // Note that the sourceAction may be either a GitHubSourceAction (GitHub version 1
    // authentication - no longer recommended) or a CodeStarConnectionsSourceAction
    // (GitHub version 2 authentication - recommended)
    githubSources: [{ owner: props.infraRepoOwner, sourceAction: infraSourceAction }],
    attemptTarget: 'https://prodUrl.somewhere.edu',
    rejectButtonText: 'Reject',
    slackChannelId: props.slackChannelId || '',
    slackChannelName: props.slackChannelName,
    successfulTarget: 'https://testUrl.somewhere.edu',
  },
})
// ...

Install

npm i @ndlib/ndlib-cdk2

License

Apache-2.0