How to run, test or debug a Docker image locally from AWS ECR

You’ve created a new Docker container and pushed it to ECR. The container is then deployed via ECS to your environment, but you see a ‘502 Bad Gateway’ error in your browser. One possible cause is the container itself. Here are a few commands to run and test your container locally to debug the issue.

Before you can download and run the Docker image you need to log in to ECR. To retrieve the ECR login token, use this AWS CLI command:

aws ecr get-login --region eu-west-1 --no-include-email --profile [profile-name]

This returns a docker login command with the token embedded. Copy the output and execute the command to log in to the registry. Note that in AWS CLI version 2 get-login has been replaced by get-login-password, which you pipe straight into docker login:

aws ecr get-login-password --region eu-west-1 --profile [profile-name] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.eu-west-1.amazonaws.com

Now you can use the Docker CLI to download and run an image from ECR.

docker run -d -p 8080:80 [aws_account_id].dkr.ecr.[region].amazonaws.com/[docker-image-name]:[tag]

You can now browse to localhost:8080 to view the running container.
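Putting the placeholders together: an ECR image URI always follows the same pattern, so it can help to build it in a variable first. A minimal sketch, using hypothetical account, region and image values:

```shell
# Hypothetical values -- substitute your own account ID, region, image and tag
AWS_ACCOUNT_ID=123456789012
REGION=eu-west-1
IMAGE_NAME=wordpress-blog
TAG=latest

# ECR image URIs always follow this pattern
ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${IMAGE_NAME}:${TAG}"
echo "${ECR_URI}"

# With valid credentials and a running Docker daemon, you would then run:
#   docker run -d -p 8080:80 "${ECR_URI}"
```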

Strictly speaking you don’t SSH into a container; you open a shell inside it with docker exec. Use the following commands.

List running containers:
docker ps

From the list, note the container name (or ID) and use this command to open a shell inside the container:
docker exec -it [container-name] /bin/bash

If the image doesn’t include bash, try /bin/sh instead.

Return low-level information on a Docker image or container:
docker inspect [image-or-container-id]

WordPress, Docker, Terraform, Packer and AWS using ECS

In this post, I’ll explain how I containerised WordPress, with custom themes and additional plugins, as a Docker application and then deployed it to AWS with ECS.

In this example, I’ve migrated The National Archives’ blog to Amazon Web Services. WordPress is the blogging platform, made stateless here by running it in a Docker container. Docker is a software platform for creating, deploying and managing virtualised application containers on a common operating system.

I use Terraform to create the infrastructure in AWS, so it’s easy to track all the resources as code. ECS or Elastic Container Service is described by Amazon as a container orchestration service that supports Docker containers. For my example, ECS is used to run and scale the WordPress Docker application. To deploy the Docker container to ECS I’m using Packer and ECR, Elastic Container Registry.

Finally, with the use of a plugin, all the static media resources used by WordPress are delivered via CloudFront and an S3 bucket (a CDN).

AWS Infrastructure

I created a Terraform module, which stores the code that handles the creation of a group of resources needed for the WordPress infrastructure, and then applied it to existing development, test and production environments.

The module creates an ECS cluster, EC2 instances, ECR, IAM roles and policies, RDS (databases), Route 53 (internal DNS) and security groups.
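Calling such a module once per environment keeps each stack identical. As an illustration, consuming the module might look something like this – the module name, source path and input variables here are all hypothetical:

```hcl
# Hypothetical module call -- names, paths and inputs are illustrative only
module "wordpress" {
  source = "../modules/wordpress"

  environment   = "dev"
  vpc_id        = var.vpc_id
  instance_type = "t3.medium"
  db_username   = var.db_username
  db_password   = var.db_password
}
```

The same block, with different variable values, is then applied to the development, test and production environments.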

Deploying the Docker container to ECR

I’m using Packer to create, tag and push the WordPress Docker image to ECR. To achieve a scalable image I’ve made the WordPress container stateless, meaning that no data is attached to the host the container is running on. Content is stored in an external database (RDS), media resources are stored in an S3 bucket and environment variables are passed into the container at launch.

Packer grabs the latest WordPress image from Docker Hub and installs the desired plugins and themes, along with WP-CLI. It then tags the image and pushes it to ECR, the registry from which ECS can pull and launch the image.
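The gist below holds the actual template. As a rough illustration, a stripped-down Packer template along these lines might look like the following – the account ID, repository name and provisioning steps are hypothetical:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "wordpress:latest",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar",
        "chmod +x wp-cli.phar && mv wp-cli.phar /usr/local/bin/wp"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/wordpress-blog",
        "tag": "latest"
      },
      {
        "type": "docker-push",
        "ecr_login": true,
        "login_server": "https://123456789012.dkr.ecr.eu-west-1.amazonaws.com/"
      }
    ]
  ]
}
```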

https://gist.github.com/domingobishop/336fb69367829c693b0e372a4cadeb4b

ECS Cluster

ECS, or Elastic Container Service, is an AWS service that handles Docker container orchestration across the EC2 cluster. When ECS pulls the Docker image from ECR and launches a new container into the cluster, it uses a Task Definition. The Task Definition tells WordPress about the database, the media S3 bucket, the CDN URL and other environment variables.
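A Task Definition is a JSON document; the gist below holds the real one. A stripped-down sketch of the container definition might look like this – all names and values are hypothetical, though the WORDPRESS_DB_* variables are the standard ones understood by the official WordPress image:

```json
{
  "family": "wordpress-blog",
  "containerDefinitions": [
    {
      "name": "wordpress",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/wordpress-blog:latest",
      "memory": 512,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ],
      "environment": [
        { "name": "WORDPRESS_DB_HOST", "value": "wordpress.abc123.eu-west-1.rds.amazonaws.com" },
        { "name": "WORDPRESS_DB_USER", "value": "wp_user" },
        { "name": "WORDPRESS_DB_NAME", "value": "wordpress" }
      ]
    }
  ]
}
```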

https://gist.github.com/domingobishop/15e3d33b1f16f30d894586bc454b7f7e

That’s it!

This is a brief explanation of how I containerised WordPress as a stateless Docker application, with themes and plugins, and then deployed it to AWS. Why do this? Here are a few reasons:

  • Rapid design and development iterations via continuous integration
  • Scalable and resilient architecture
  • Enhanced security
  • Faster content delivery

The National Archives’ Design Frontend Toolkit

There was a need from both designers and developers for a single source of The National Archives’ design patterns. The one tool that makes collaborative design easier is the design pattern library.

A pattern library is a broadly accepted guide that codifies the interactive, visual, and copy elements of a user interface and system – a living collection of all of the product’s customer-facing components. Because it captures all of the detailed elements of the design system, collaborative work sessions can focus on user needs.

In response to this, I built a collection of CSS and HTML elements, the Frontend Toolkit, for use as part of the application’s frontend – a way of creating flexible and unique layouts whilst also maintaining consistency across The National Archives online.

You can find the GitHub repo here.

Jenkins AWS Terraform Module – Continuous integration

At the National Archives we are in the process of migrating our digital services to the cloud. We are investigating Jenkins, an automation server, as part of our continuous integration with AWS.

Jenkins is a distributed automation server, generally associated with Continuous Integration (CI) and Continuous Delivery (CD). I’ve created a Jenkins Terraform module to deploy a Jenkins cluster on AWS. This module creates the architecture (EC2 instance, security group and Elastic IP) and installs Jenkins and associated plugins. Feel free to review the plugin list and add or remove plugins as needed. The two plugins necessary for AWS are CodeDeploy and EC2.

The module is based on AWS’s ‘Set Up a Jenkins Build Server’ tutorial. The Jenkins cluster comprises one master instance coupled with slave instances.

Peter Quill the Twitter Chatbot

I’m currently experimenting with NLTK’s chat package and Tweepy to build a Twitter chatbot. I used the examples outlined in NLTK as a starting point and extended the app to reply to tweets. Peter is a very basic chatbot; next, I hope to incorporate sentiment analysis and machine learning to make his responses more meaningful.

Peter can be found on Twitter with the handle @starlord_p.

The code is on GitHub here.

Hypothesis driven design: workshop part 2

Collaborative design

Hypothesis driven design (HDD) is a way of applying user research and experimentation to validate design choices that are outcome focused and not output focused. Please have a look at ‘Hypothesis driven design: workshop part 1’ for more information about HDD.

Collaborative design is a crucial stage in the process. It allows teams to create design concepts together and help build shared understanding and joint ownership of the design problem and solutions.

Key principles:

  • Everybody gets to design through cross-functional collaboration
  • Low-fidelity prototypes increase collaboration
  • Shared understanding across the team

Workshop part 2

We are aware there are problems with the main navigation menu (mega-menu) on The National Archives’ website. We’ve had feedback from users that the menu is often missed, and Google Analytics even suggests the click rate is low. For the workshop we decided to use the mega-menu to test run hypothesis driven design.

Following the previous workshop, we had three assumptions:

  1. The mega-menu is hidden: if we make the top-level items visible, users will be more likely to use it.
  2. The mega-menu is overwhelming: if we make it easier to digest, users will be more likely to find what they are looking for.
  3. The mega-menu content is not user focused: if we design the content to reflect user journeys, users will be more likely to find what they are looking for.

Which then formed our three hypotheses:

Hypothesis 1

Changing the current mega-menu
to a mega-menu where the top level menu items are visible
will lead to more users interacting with the menu
because it’s no longer hidden behind a button,
and we’ll know that this is true when we see a higher click rate.

Hypothesis 2

Changing the current mega-menu
to a new menu structure where we see fewer menu items at a time
will lead to users feeling less overwhelmed
because fewer choices are easier to scan and digest,
and we’ll know that this is true when we see positive feedback from users.

Hypothesis 3

Changing the current mega-menu information architecture
to a new structure that reflects user journeys
will lead to more users finding what they need more easily
because the menu is user focused and not how we organise ourselves,
and we’ll know that this is true when we see positive feedback from users.

Review assumptions, hypotheses and metrics

The first step was to review the work from the previous workshop. Are the assumptions still valid? Have we chosen the correct metrics? Do we need to rework the hypotheses?

For the first hypothesis, the group decided that click rate alone wasn’t enough to measure success. We would also need to consider the bounce rate once the user has clicked through.

It was also identified that a benchmark would be needed. If users struggle to find what they need because they are overwhelmed by the amount of content or the IA doesn’t reflect their expectations, then we’d need to user test the current mega-menu and compare the results with the new prototype.

Collaborative design studio

A design studio scenario is a way to bring a cross-functional team together to visualise potential solutions to a design problem.

There are a number of ways to approach the design studio process. For example Lean UX proposes a number of specific techniques. Within our workshop we naturally fell into the following steps:

  1. Group idea generation
  2. Paper prototyping
  3. Iterate and refine
  4. Presentation and critique

We divided into three groups, one for each hypothesis. Each group discussed possible solutions and created a paper prototype, low-fidelity sketches or wireframes. Paper prototypes are useful to maintain flexibility, which allows the team to respond quickly to user feedback whilst testing.

What’s next?

Workshop part 3 will cover iterative processes, user testing to validate each hypothesis and analysing feedback.


Hypothesis driven design: workshop part 1

Hypothesis driven design enables good UX experiments

Hypothesis driven design is a way of applying user research and experimentation to validate design choices. We, as a matter of course, user test all our digital products, but do we really know what we are testing and measuring?

The approach helps teams focus on finding solutions that are meaningful to users and to take calculated risks to move a product forward. Making sure design solutions deliver the outcomes we think they will, by using performance data to measure success or failure. And failure is fine, because we learn.

The key principles:

  • Outcome focused – not output focused
    The approach is not to create a deliverable or output (eg, we will create a sign-on form), but to create an outcome (eg, we want to increase the number of sign-ups).
  • Evidence based design – informed thinking
    The hypothesis statement is designed to help prove or disprove an assumption. Through user research and experimentation each hypothesis is tested to see whether we’ve achieved our desired outcomes.
  • Collaborative design – shared understanding and joint ownership
    Collaborative design allows teams to create design concepts together. It helps build shared understanding and joint ownership of the design problem and solutions.

Workshop part 1

We are aware there are problems with the main navigation menu (mega-menu) on The National Archives’ website. We’ve had feedback from users that the menu is often missed, and Google Analytics even suggests the click rate is low. For the workshop we decided to use the mega-menu to test run hypothesis driven design.

Declaring assumptions

The first step was to declare our assumptions as a group exercise. In preparation, we had Google Analytics reports, user feedback, past attempts to address the issue and our own experiences of using the mega-menu. These helped us form our assumptions.

We broke up into smaller groups and we came up with three assumptions:

  • The mega-menu is hidden: if we make the top-level items visible, users will be more likely to use it.
  • The mega-menu is overwhelming: if we make it easier to digest, users will be more likely to find what they are looking for.
  • The mega-menu content is not user focused: if we design the content to reflect user journeys, users will be more likely to find what they are looking for.

How to measure

Next we needed to work out how we would validate each assumption. In each case we decided a combination of Google Analytics (click rate) and user feedback would give us the metrics to prove or disprove our assumptions. Obviously, the initial stages of prototyping (paper or wireframe based) wouldn’t have the Google Analytics tooling, but we felt user feedback would be sufficient.

We also discussed that a high-fidelity prototype would benefit from A/B testing, which would give us quantitative evidence via Google Analytics.

The hypothesis statement

The last step of the workshop was to create hypotheses from our assumptions. A hypothesis is written in a set format so that assumptions are easier to test.

We wrote our hypotheses by using the following formula:

Changing [ __________ ]
to [ __________ ]
will lead to [ __________ ]
because [ __________ ],
and we’ll know that this is true/false when we see [ __________ ].

This is what the first assumption transformed into a hypothesis statement would look like:

Changing the current mega-menu
to a mega-menu where the top level menu items are visible
will lead to more users interacting with the menu
because it’s no longer hidden behind a button,
and we’ll know that this is true/false when we see a higher click rate.

What’s next?

Workshop part 2 will cover collaborative design, prototyping (paper, wire-frame and high-fidelity prototypes), iterative processes and user testing to validate each hypothesis.

Why writing a well structured first paragraph or excerpt is important for SEO

We invest a huge amount of effort into writing content, because we know it’s important to our users. But what’s the point of all that hard work if hardly anybody actually finds your content? If your content is well written and well structured, your chances of ranking well in Google are higher. Here are a few points for writing content optimised for search engines:

1. The first 160 characters
The first 160 characters of your paragraph need to cover the main point of the page or post. This way, you make it easy for your reader to figure out what your page or post is about, and you tell Google what it’s about. Try the inverted pyramid approach.

2. A compelling and descriptive paragraph
Write a compelling, descriptive and unique paragraph and use plain English (don’t use words like ‘require’ or ‘obtain’). Keep the paragraph active and include a verb.

3. Keywords
Don’t forget to put your focus keyword in the first paragraph! Especially ones you haven’t included in the page title.

4. Avoid less known acronyms
Only use acronyms if they are very common, such as EU and NATO. SA (South Australia) is not a commonly understood acronym. It can be confused with South Africa or South America.

5. Non-alpha characters
Do not use quotes or any non-alpha characters (Google cuts them out of the meta description).

6. First paragraph/excerpt length
A good first paragraph is short enough to be consumed quickly, but long enough to give some substance and hint at what is to come. There is no hard rule, but this experiment suggests 55-60 words is the ideal length for SEO.

User testing prototypes checklist

Define goals

Define your goals and research questions clearly. Ask yourself: What am I trying to prove or validate? Do I want to validate design choices? Do I want to validate the layout? Does the language make sense? Does the functionality assist the user? It helps to break down testing goals into categories (eg functional, editorial and design goals).

Strong, simple goals help you to:

  • see if users understand what they need to do and can complete all relevant tasks
  • identify usability issues
  • generate ideas for how to improve

Specify target group

Your research goals, together with your user personas, will help you select your target group: demographic, digital skills, social and economic status and/or education level.

Review the criteria with your team to make sure you’re selecting the right people for the user testing.

Planning

Before planning any testing session, work with your team to agree the research questions, types of users and type of prototype you want to test. Prototypes can take the form of paper, wireframes or interactive user journeys for example.

User testing sessions can take up to 60 minutes, depending on the complexity of the tasks. Allow at least 15 minutes between sessions.

Design the tasks

Tasks need to be designed carefully to make sure they answer your research questions. Well defined goals will make this process easier.

According to GDS, good test tasks:

  • set a clear goal for participants to try and achieve
  • are relevant and believable to participants
  • are challenging enough to uncover usability issues
  • don’t give away ‘the answer’ or hint at how a participant might complete them

Rehearse and revise the tasks with colleagues. Doing a rehearsal of the test will reduce on-the-day stresses.

Maintain a record of each test session. Ideally, you’d want users to voice their feelings and thoughts out loud as they navigate the test.

Running user test sessions

For each participant, introduce yourself, explain the research and remind them about things like how you are recording the session.

Explain each test task: what you want the participant to do, using clear instructions. Ask the participant to tell you their thoughts as they run through the task.

During the task, mostly watch and listen.

At the end of the session, ask follow-up questions about the things you observed and check if the participant has any final thoughts.

Findings

Organise the findings into categories: functionality, editorial, design or per task. Did a majority of users stumble during a particular task? Did the layout leave them completely clueless about how to proceed? Was the language confusing?

Next rank the findings according to severity. Ranking findings helps the team understand how critical each issue is. Don’t forget to include positive findings as well, letting the team know what’s already working.

Quick start with Yoast SEO plugin for WordPress and SEO best practices

Yoast offers a very powerful free SEO management plugin for WordPress. The Premium version offers many features, but the free version is more than adequate with a bit of SEO best practice knowledge.

  1. Install Yoast SEO and activate the plugin: https://en-gb.wordpress.org/plugins/wordpress-seo/
  2. Quick setup: Go to SEO dashboard from the menu, click on the General tab and click Open the configuration wizard.
  3. Choose Configure Yoast SEO.
  4. Follow the steps choosing the appropriate options for your site.
  5. Once you’re done you’re ready to go.

SEO best practices

When creating content (a page or post) for your site, you need to consider how your audience will find it. Yoast helps you manage the technical optimisation.

Yoast will automatically add meta data based on the content of your page or post. With some tweaking, you can strategically tailor your SEO.

Edit your page or post and scroll down until you see the Yoast SEO panel. Here you will find a preview snippet of how the page will look in a search engine results page.

You can also edit this meta data by following SEO best practice guidelines:

Meta titles

1. Use no more than 65 characters including spaces
Keep all titles to 65 characters or fewer (including spaces), because search engines truncate (cut off) titles longer than that in Google search results.

2. Unique
Make sure your title is unique. It’s not helpful for people if search results show a list of pages with the exact same title.

3. Clear and descriptive
Titles should be clear and descriptive. The title should provide full context so that people can easily see if they’ve found what they’re looking for.

4. Front-load with the most important keywords
Front-load your titles. The most important information and keywords the user is most likely to have searched for should be at the beginning.

5. Be clever, but not too clever
Avoid puns or wordplay since these can make the content difficult to find.

Meta descriptions

1. 160 characters including spaces
Keep all descriptions to 160 characters (including spaces). Make sure you cover the main point of the page or post within the summary.

2. Keeping it accessible
Summaries should end with a full stop; this can help people who use assistive technology such as screen readers.

3. Compelling meta descriptions
Write compelling and unique meta descriptions and use plain English (don’t use words like ‘require’ or ‘obtain’). Avoid duplicate meta descriptions. Keep summaries active and include a verb.

4. Keywords
Include keywords – especially ones you haven’t included in the page title.

5. Avoid less known acronyms
Only use acronyms if they are very common, such as EU and NATO. SA (South Australia) is not a commonly understood acronym. It can be confused with South Africa or South America.

6. Non-alpha characters
Do not use quotes or any non-alpha characters (Google cuts them out of the meta description).

Image credit: https://www.sandcrestseo.com

Collaborative problem solving, discovery-driven learning and integrated decision making

I’m framing my research with this question: what facilitates and supports users’ creative collaborative experience in virtual spaces? Linda Hill, a management professor, discusses leadership using Pixar and Google as case studies. What’s the secret to unlocking the creativity hidden inside your daily work, and giving every great idea a chance?

The point I’d like to draw from this talk is about the collaborative process that these organisations share and as Hill puts it, “…creating the space to share and combine their talents and passions.”

Robert Hargrove, a CEO consultant, defines collaboration as “an act of shared creation and/or shared discovery: two or more individuals with complementary skills interacting to create a shared understanding that none had previously possessed or could have come to on their own. Collaboration creates a shared meaning about a process, a product, or an event.” [1]

I like this definition because it uses the words discovery and process. Creative collaboration is very much a process of discovery through discussion and debate of diverse ideas, testing and refining the ideas and reconfiguring them into new combinations.

This is highlighted by Hill breaking it down into three capabilities:

Creative abrasion

  • Marketplace of ideas through debate and discourse
  • Diversity and conflict
  • Collaborative problem solving

Creative agility

  • Test and refine the portfolio of ideas
  • Pursuit, reflection and adjustment
  • Discovery-driven learning

Creative resolution

  • Combine ideas to reconfigure them into new combinations
  • Inclusive and integrated decision making process

Going back to Hill’s point about leaders creating the space to share and combine talents and passions: my research explores how, through technology, virtual spaces could facilitate and support creative collaboration. Roy Ascott said “telematic interactivity offer extraordinary collaboration opportunities to artists” [2], telematic interactivity being the virtual space created for collaboration.

My question is, what facilitates and supports the creative collaborative experience in interactive virtual spaces for multidisciplinary teams? Hill’s points on collaborative problem solving, discovery-driven learning and integrated decision making are areas that will contribute to framing this question.

Collaboration is the act of working together to produce a work and importantly, a process of discovery and learning. The internet is a tool for collaboration. Online collaboration offers opportunities to network, research and create relationships. It expands the realm of collaborative opportunities and vastly increases interaction with others. The focus of this research is understanding users’ experience when participating in collaborative scenarios to help better design collaborative virtual environments.

[1] Hargrove, R., Mastering the Art of Creative Collaboration, 1997
[2] Ascott, R., On Networking by Roy Ascott, 2011

Agile and human-centred design

What design challenges are we facing today in this rapidly changing environment? New and maturing technologies, such as artificial intelligence, virtual reality (VR) and augmented reality (AR) to list a few, present us with the potential for new products once only conceivable in science fiction.

I’d like to summarise the first chapter of ‘The Design of Everyday Things’ by Don Norman in two parts: human-centred design and the principles of interaction. Norman discusses the principles and characteristics of good design, which sit central to any design methodology. I’m very familiar with Agile as a design approach, and if you are too, you will see where some of the Agile principles stem from within human-centred design.

Two of the most important characteristics of good design are discoverability and understanding.

  • Discoverability – is it possible to even figure out what actions are possible and where and how to perform them?
  • Understanding – what does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?

Modern devices offer complexity. As designers, we strive to design products that remove this complexity and fulfil people’s needs while being understandable and usable. Discoverability and understanding are key characteristics of design, but to use them effectively designers need a deep understanding of the people who use the product.

Design presents a fascinating interplay of technology and psychology, and designers must understand both.

Human-centred design

Each new development seems to repeat the mistakes of the earlier ones; each new field requires time before it, too, adopts the principles of good design… and each new invention of technology… requires experimentation and study before the principles of good design can be fully integrated into practice.

We are seeing this with VR and AR. The VR and AR fields are currently in an experimental phase, at the same time, designers are exploring what best practice could look like.

Human-centred design is a design philosophy, an ‘approach that puts human needs, capabilities and behavior first’ (Norman).

It starts with a good understanding of people and the needs that the design is intended to meet.

This is done through rapid tests of ideas, and after each test modifying the approach and the problem definition.

The rapid testing of ideas echoes the Agile approach to design. Agile is an iterative process of rapid development, testing and, based on the outcomes of the testing, modifying the design where required to help meet the needs of the user. In the next post I will discuss the principles of interaction. Understanding these principles will help us gain a better insight into the user.