From 53929c34dfa865e423c22ab876a99b8705a3c7ed Mon Sep 17 00:00:00 2001 From: Dalibor Nasevic Date: Tue, 22 Oct 2024 18:55:33 +0200 Subject: [PATCH] Redirect all pages to the new blog --- _includes/block/head.html | 3 +- _posts/2018-02-22-announcing-terminus.md | 2 +- ...-02-introducing-eslint-plugin-i18n-json.md | 1 + ...asset-system-for-react-and-react-native.md | 1 + ...ve-nodejs-client-for-the-kubernetes-api.md | 1 + ...-kubernetes-introduction-for-developers.md | 1 + ...5-08-moving-from-webdriver-to-puppeteer.md | 1 + _posts/2018-05-15-jiractl.md | 1 + _posts/2018-06-06-cicd-best-practices.md | 1 + _posts/2018-06-12-announcing-winston-3.md | 1 + ...8-06-19-jenkins-build-monitoring-plugin.md | 1 + _posts/2018-06-28-amazon-eks.md | 135 ++++++++-------- _posts/2018-07-10-react-native-wdio.md | 1 + ...-user-performance-measuring-for-next-js.md | 1 + _posts/2018-08-24-higher-order-reducers.md | 25 +-- _posts/2018-09-04-python-mocking.md | 3 +- _posts/2018-09-11-redis-ruby-bloom-filter.md | 1 + ...oap-apis-in-functional-tests-using-nock.md | 3 +- ...n-for-rails-apps-with-sidekiq-scheduler.md | 1 + _posts/2018-10-16-cypress-vs-selenium.md | 141 ++++++++-------- ...le-lighthouse-as-a-service-lighthouse4u.md | 7 +- ...reduxful-manage-restful-data-with-redux.md | 21 +-- _posts/2018-11-13-engaging-standups.md | 3 +- ...8-11-20-developer-view-oozie-vs-airflow.md | 45 +++--- _posts/2018-12-14-require-suffix.md | 3 +- _posts/2018-12-20-python-metaclasses.md | 3 +- _posts/2019-02-26-software-vpn-channel.md | 3 +- ...-03-06-dynamic-configuration-for-nodejs.md | 1 + _posts/2019-04-02-addhoc.md | 1 + _posts/2019-04-09-announcing-exemplar.md | 1 + .../2019-04-16-kubernetes-external-secrets.md | 1 + _posts/2019-04-25-domain-connect.md | 23 +-- ...9-05-22-testing-react-native-using-ekke.md | 1 + ...y-contribution-datetimepicker-component.md | 1 + ...5-asherah-opensource-app-encryption-sdk.md | 153 +++++++++--------- .../2019-07-16-domain-vertial-classifier.md | 37 ++--- ...lackbox-docker-an-experimental-approach.md | 11 +- _posts/2019-07-26-domain-name-valuation.md | 1 + ...2019-08-13-kubernetes-gated-deployments.md | 1 + _posts/2019-09-03-doh-concerns.md | 11 +- ...-PHP-malware-and-xor-encrypted-requests.md | 37 ++--- ...2019-11-19-frontend-caching-quick-start.md | 3 +- _posts/2019-11-26-making-frameworks.md | 1 + _posts/2019-12-03-is-my-host-fast-yet.md | 5 +- _posts/2019-12-05-securing-the-cloud.md | 1 + _posts/2019-12-10-Kernel-Bypass-Networking.md | 3 +- ...prediction-interval-with-neural-network.md | 25 +-- _posts/2020-01-27-b-root.md | 3 +- ...020-05-06-godaddy-splitio-collaboration.md | 1 + .../2020-05-12-experimentation-practices.md | 1 + _posts/2021-02-11-gasket-api-preset.md | 6 +- ...021-05-07-godaddys-journey-to-the-cloud.md | 35 ++-- ...07-serverless-aws-servicecatalog-plugin.md | 13 +- ...21-06-09-android-animated-pride-rainbow.md | 15 +- _posts/2021-06-14-test-harness.md | 1 + .../2021-07-07-radpack-your-dependencies.md | 1 + _posts/2021-08-26-tartufo.md | 1 + _posts/2021-09-29-godaddy-response-csam.md | 1 + ...2021-11-08-android-state-management-mvi.md | 49 +++--- _posts/2022-01-06-tartufo-v3.md | 1 + _posts/2022-01-10-running-puma-in-aws.md | 1 + ...28-raising-the-bar-for-devsecops-beyond.md | 1 + _posts/2022-03-22-fluent-bit-plugins-in-go.md | 1 + _posts/2022-05-27-study-group-framework.md | 1 + ...07-28-websites-and-marketing-case-study.md | 1 + _posts/2022-09-12-rails-bulk-insert-mysql.md | 1 + _posts/2022-09-19-sample-size-calculator.md | 1 + 
...-aws-resources-using-globaltechregistry.md | 1 + ...-runaway-memory-usage-in-istio-sidecars.md | 1 + _posts/2022-10-31-optimized-hosting.md | 1 + _posts/2022-12-01-data-mesh.md | 1 + _posts/2022-12-15-search-data-engineering.md | 1 + ...2-03-mental-health-in-software-industry.md | 11 +- ...e-to-webview-bridge-with-rxjs-and-redux.md | 1 + _posts/2023-03-20-leveraging-ffis.md | 1 + _posts/2023-03-28-data-platform-evolution.md | 1 + ...-company-agility-and-scale-in-the-cloud.md | 3 +- ...ncryption-in-ruby-on-rails-with-asherah.md | 1 + _posts/2023-06-13-hosting-in-aws.md | 1 + _posts/2023-06-23-ceph-storage.md | 1 + ...023-08-07-lambda-rest-api-using-aws-cdk.md | 1 + ...4-open-source-summit-north-america-2023.md | 1 + ...2023-08-22-cpu-vulnerability-management.md | 1 + _posts/2023-09-05-cmdb.md | 1 + _posts/2023-09-28-aws-cdk-adoption.md | 1 + ...26-layered-architecture-for-a-data-lake.md | 1 + _posts/2023-11-16-api-gateway-at-godaddy.md | 7 +- _posts/2023-11-20-emr-serverless-on-arm64.md | 1 + .../2023-12-07-cloud-cost-management-aws.md | 1 + .../2023-12-12-authorization-oauth-openfga.md | 1 + 90 files changed, 496 insertions(+), 408 deletions(-) diff --git a/_includes/block/head.html b/_includes/block/head.html index c226231..7977126 100644 --- a/_includes/block/head.html +++ b/_includes/block/head.html @@ -31,9 +31,10 @@ {% feed_meta %} {% if page.canonical %} + {% else %} - + {% endif %} diff --git a/_posts/2018-02-22-announcing-terminus.md b/_posts/2018-02-22-announcing-terminus.md index eccb4bd..251883d 100644 --- a/_posts/2018-02-22-announcing-terminus.md +++ b/_posts/2018-02-22-announcing-terminus.md @@ -4,7 +4,7 @@ title: "Health Checks and Graceful Shutdown for Node.js Applications" date: 2018-02-22 11:16:01 -0800 cover: /assets/images/headers/graceful-shutdown.jpg excerpt: Your application is serving requests constantly for your users. You and your team want to ship features and fixes as soon as they are ready, so you do continuous delivery. But what happens to your users who used your product at the time of the deployment? Chances are, the requests they have in progress are going to fail. This post helps you fix that. -canonical: https://nemethgergely.com/nodejs-healthcheck-graceful-shutdown +canonical: https://godaddy.com/resources/news/announcing-terminus authors: - name: Gergely Nemeth url: https://twitter.com/nthgergo diff --git a/_posts/2018-04-02-introducing-eslint-plugin-i18n-json.md b/_posts/2018-04-02-introducing-eslint-plugin-i18n-json.md index 3b89441..17555f4 100644 --- a/_posts/2018-04-02-introducing-eslint-plugin-i18n-json.md +++ b/_posts/2018-04-02-introducing-eslint-plugin-i18n-json.md @@ -4,6 +4,7 @@ title: "Introducing a fully extendable eslint plugin for JSON i18n translation date: 2018-04-02 11:16:01 -0800 cover: /assets/images/headers/eslint-plugin-i18n-json.png excerpt: Many web apps harness internationalization through frameworks such as React-Intl. This is awesome for the web and helps web apps obtain a global reach. 
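The link tags inside the head.html hunk earlier in this patch did not survive extraction; only the `{% if page.canonical %} ... {% else %} ... {% endif %}` skeleton remains. Judging from that structure, the change presumably emits a post's canonical override when one is set and falls back to a self-referencing canonical otherwise. A minimal sketch of that logic, assuming Jekyll's `absolute_url` filter; the exact markup in the real template is not recoverable from this patch:

```html
{% if page.canonical %}
  <link rel="canonical" href="{{ page.canonical }}">
{% else %}
  <link rel="canonical" href="{{ page.url | absolute_url }}">
{% endif %}
```

With that branch in place, each post only needs the one-line `canonical:` front matter entry that the rest of this patch adds.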
+canonical: https://godaddy.com/resources/news/introducing-eslint-plugin-i18n-json authors: - name: Mayank Jethva url: https://github.com/mayank23 diff --git a/_posts/2018-04-04-isomorphic-asset-system-for-react-and-react-native.md b/_posts/2018-04-04-isomorphic-asset-system-for-react-and-react-native.md index 9d785d8..ba17aa1 100644 --- a/_posts/2018-04-04-isomorphic-asset-system-for-react-and-react-native.md +++ b/_posts/2018-04-04-isomorphic-asset-system-for-react-and-react-native.md @@ -6,6 +6,7 @@ cover: /assets/images/headers/isomorphic-asset-system.png excerpt: Introducing Asset System, a cross-platform asset rendering system for React and React-Native using SVGs. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/isomorphic-asset-system-for-react-and-react-native authors: - name: Arnout Kazemier url: https://github.com/3rd-Eden diff --git a/_posts/2018-04-10-an-intuitive-nodejs-client-for-the-kubernetes-api.md b/_posts/2018-04-10-an-intuitive-nodejs-client-for-the-kubernetes-api.md index 60d068b..c237011 100644 --- a/_posts/2018-04-10-an-intuitive-nodejs-client-for-the-kubernetes-api.md +++ b/_posts/2018-04-10-an-intuitive-nodejs-client-for-the-kubernetes-api.md @@ -4,6 +4,7 @@ title: "An Intuitive Node.js Client for the Kubernetes API" date: 2018-04-10 4:16:01 -0800 cover: /assets/images/headers/kubernetes-client.jpg excerpt: This post explains the motivation for and design of kubernetes-client. We provide a short example on how to write your custom Kubernetes extensions using Node.js and kubernetes-client. +canonical: https://godaddy.com/resources/news/an-intuitive-nodejs-client-for-the-kubernetes-api authors: - name: Silas Boyd-Wickizer url: https://github.com/silasbw diff --git a/_posts/2018-05-02-kubernetes-introduction-for-developers.md b/_posts/2018-05-02-kubernetes-introduction-for-developers.md index 5730e00..bdcb342 100644 --- a/_posts/2018-05-02-kubernetes-introduction-for-developers.md +++ b/_posts/2018-05-02-kubernetes-introduction-for-developers.md @@ -4,6 +4,7 @@ title: "Kubernetes - A Practical Introduction for Application Developers" date: 2018-05-02 05:16:01 -0800 cover: /assets/images/headers/kubernetes-intro.jpg excerpt: A collection of resources / best practices that help you become a more productive developer working with Kubernetes. +canonical: https://godaddy.com/resources/news/kubernetes-introduction-for-developers authors: - name: Gergely Nemeth url: https://twitter.com/nthgergo diff --git a/_posts/2018-05-08-moving-from-webdriver-to-puppeteer.md b/_posts/2018-05-08-moving-from-webdriver-to-puppeteer.md index 8d65d02..a3c7f7e 100644 --- a/_posts/2018-05-08-moving-from-webdriver-to-puppeteer.md +++ b/_posts/2018-05-08-moving-from-webdriver-to-puppeteer.md @@ -4,6 +4,7 @@ title: "UI Testing: moving from WebdriverIO and Selenium to Puppeteer" date: 2018-05-07 7:16:01 -0800 cover: /assets/images/headers/puppet-theater.jpg excerpt: When our team was losing engineering hours to Selenium-related test flakiness, we switched to Puppeteer for some of our UI tests. Given our constraints, we found that Puppeteer had a better developer experience and that the similar syntaxes of the two frameworks made the switch easy. We recommend Puppeteer for projects that do not need cross-browser compatibility.
+canonical: https://godaddy.com/resources/news/moving-from-webdriver-to-puppeteer authors: - name: Conor Fellin url: https://www.linkedin.com/in/conor-fellin-840ba354/ diff --git a/_posts/2018-05-15-jiractl.md b/_posts/2018-05-15-jiractl.md index 3e7cd38..7ec399c 100644 --- a/_posts/2018-05-15-jiractl.md +++ b/_posts/2018-05-15-jiractl.md @@ -4,6 +4,7 @@ title: "jiractl: A command-line tool for managing Jira" date: 2018-05-15 08:53:01 -0800 cover: /assets/images/headers/jiractl-cover.jpg excerpt: This post introduces jiractl, a command-line tool for managing Jira. We provide some instructions on how to set up and use jiractl. +canonical: https://godaddy.com/resources/news/jiractl authors: - name: Emma Lubin url: https://twitter.com/lubin_emma diff --git a/_posts/2018-06-06-cicd-best-practices.md b/_posts/2018-06-06-cicd-best-practices.md index fcf049b..c1006b2 100644 --- a/_posts/2018-06-06-cicd-best-practices.md +++ b/_posts/2018-06-06-cicd-best-practices.md @@ -4,6 +4,7 @@ title: "Jenkins Best Practices - Practical Continuous Deployment in the Real Wor date: 2018-06-05 08:53:01 -0800 cover: /assets/images/ninjenkins.png excerpt: This post describes how we use best practices for CICD pipelines using Jenkins. +canonical: https://godaddy.com/resources/news/cicd-best-practices authors: - name: Jeff Pearce url: https://www.linkedin.com/in/jeffpea/ diff --git a/_posts/2018-06-12-announcing-winston-3.md b/_posts/2018-06-12-announcing-winston-3.md index 4f96aa1..c76a6a1 100644 --- a/_posts/2018-06-12-announcing-winston-3.md +++ b/_posts/2018-06-12-announcing-winston-3.md @@ -4,6 +4,7 @@ title: "Announcing winston@3.0.0!" date: 2018-06-12 05:53:01 -0800 cover: /assets/images/typeset-cover.jpg excerpt: After several years the winston team is happy to announce the latest version – 3.0.0! Learn more about the latest version of the most popular logging library for Node.js along with what Node.js LTS means to maintainers of popular npm packages. +canonical: https://godaddy.com/resources/news/announcing-winston-3 authors: - name: Charlie Robbins url: https://www.github.com/indexzero diff --git a/_posts/2018-06-19-jenkins-build-monitoring-plugin.md b/_posts/2018-06-19-jenkins-build-monitoring-plugin.md index 51caec6..dd24d7c 100644 --- a/_posts/2018-06-19-jenkins-build-monitoring-plugin.md +++ b/_posts/2018-06-19-jenkins-build-monitoring-plugin.md @@ -4,6 +4,7 @@ title: "A build monitoring plugin for Jenkins" date: 2018-06-19 05:16:01 -0800 cover: /assets/images/jenkmagic.png excerpt: We recently built a plugin to automatically monitor the health of our Jenkins builds. This article talks about how and why the plugin was built, and describes how it works at a high level. +canonical: https://godaddy.com/resources/news/jenkins-build-monitoring-plugin authors: - name: Jeff Pearce url: https://www.linkedin.com/in/jeffpea/ diff --git a/_posts/2018-06-28-amazon-eks.md b/_posts/2018-06-28-amazon-eks.md index 351cde9..bcebdee 100644 --- a/_posts/2018-06-28-amazon-eks.md +++ b/_posts/2018-06-28-amazon-eks.md @@ -6,105 +6,106 @@ cover: /assets/images/ekscover.jpg options: - full-bleed-cover excerpt: GoDaddy's engineering teams need a robust solution for running container-based workloads. Amazon EKS gives us a shared responsibility service model that minimizes operational complexity and delivers the powerful benefits of running on Kubernetes. 
+canonical: https://godaddy.com/resources/news/amazon-eks authors: - name: Edward Abrams url: https://www.linkedin.com/in/zeroaltitude/ photo: /assets/images/eabrams.jpg --- -Imagine nearly 200 engineering teams, many of whom are looking for a solution to running container workloads in order to reduce -operational complexity, manage orchestration and scale horizontally on the fly. What happens when they don't have a common -solution at hand? By nature, engineers will seek out a solution, evaluate, and then begin solving problems. There are a number -of viable, useful and technically sound container runtime solutions out there. If these teams are operating independently, not -every team will choose the same solution. The downside to this is that each of these solutions can be complex to operate and each +Imagine nearly 200 engineering teams, many of whom are looking for a solution to running container workloads in order to reduce +operational complexity, manage orchestration and scale horizontally on the fly. What happens when they don't have a common +solution at hand? By nature, engineers will seek out a solution, evaluate, and then begin solving problems. There are a number +of viable, useful and technically sound container runtime solutions out there. If these teams are operating independently, not +every team will choose the same solution. The downside to this is that each of these solutions can be complex to operate and each has its own best practices. As teams grow, shrink, and shift, it becomes incrementally more difficult and expensive to keep -operating lots of different solutions over time, making it harder to combine efforts from various teams on particular projects, or -to fold projects together under a common team. The risks of siloing knowledge of best practices and accumulating technical debt +operating lots of different solutions over time, making it harder to combine efforts from various teams on particular projects, or +to fold projects together under a common team. The risks of siloing knowledge of best practices and accumulating technical debt surrounding bespoke, single-case solutions are high. A predictable and potentially viable way to solve this would be to attempt to break out our operation of [Kubernetes]( -https://kubernetes.io/), the engine we chose, into a platform team and offer it to other teams "as a service." This solves the -problem of the diversity of solutions in play and focuses the technical expertise of operating a complex environment on one team. +https://kubernetes.io/), the engine we chose, into a platform team and offer it to other teams "as a service." This solves the +problem of the diversity of solutions in play and focuses the technical expertise of operating a complex environment on one team. However, many problems remain. -Running an "as a service" container solution requires a solid definition of the platform's responsibility versus the product -teams' responsibility. In an environment such as Kubernetes, it can be challenging to debug and optimize an application. This is -because the problem space is split between the Kubernetes control plane (master nodes with their core context services, such as -kube-controller-manager) and the worker nodes where the application runs. So, for example, if a particular high volume service is -experiencing slow load times or intermittent interruptions, the solution might be to tune kube-controller-manager's qps and burst -settings or kube-apiserver's max-requests-inflight. 
But the problem could also be in the application's configuration or its pod -runtime settings. It can be very difficult to divide up the responsibilities for the cluster between a platform team and the -various application teams who are using it. It can be even harder to do so in a way that can scale across an organization or +Running an "as a service" container solution requires a solid definition of the platform's responsibility versus the product +teams' responsibility. In an environment such as Kubernetes, it can be challenging to debug and optimize an application. This is +because the problem space is split between the Kubernetes control plane (master nodes with their core context services, such as +kube-controller-manager) and the worker nodes where the application runs. So, for example, if a particular high volume service is +experiencing slow load times or intermittent interruptions, the solution might be to tune kube-controller-manager's qps and burst +settings or kube-apiserver's max-requests-inflight. But the problem could also be in the application's configuration or its pod +runtime settings. It can be very difficult to divide up the responsibilities for the cluster between a platform team and the +various application teams who are using it. It can be even harder to do so in a way that can scale across an organization or across the company. -Operating such a service also requires an integrated authentication and authorization mechanism so that access to resources is -appropriately gated. It requires an implementation of some kind of charge-back model so that the cost of the resources is shared -among the teams using the solution. It requires the members of that platform team to be able to handle the issues generated by -potentially hundreds of teams of engineers, to keep on top of the needs of those teams, to keep the compute infrastructure +Operating such a service also requires an integrated authentication and authorization mechanism so that access to resources is +appropriately gated. It requires an implementation of some kind of charge-back model so that the cost of the resources is shared +among the teams using the solution. It requires the members of that platform team to be able to handle the issues generated by +potentially hundreds of teams of engineers, to keep on top of the needs of those teams, to keep the compute infrastructure supporting those teams appropriately scaled, to make capital expenditures when the resource pool is deemed too thin, and more. A fully funded platform effort could address all of these issues. But what if operating "as a service" infrastructures that span from the resource and virtualization layer through the container runtime layer and all the way up to end-user applications and services isn't the main distinguishing feature of your business? What if you need your hundreds of teams of engineers to be focused on the -tooling and customer-facing products that differentiate your products from your competitors? In short, if your company's primary -mission isn't running these on-premises "as a service" solutions, it's difficult to justify the significant person time and +tooling and customer-facing products that differentiate your products from your competitors? In short, if your company's primary +mission isn't running these on-premises "as a service" solutions, it's difficult to justify the significant person time and investment that is needed to keep it running successfully for a large engineering organization. 
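As a concrete aside on the tuning example a few paragraphs back: the settings named there correspond to real flags on the control-plane binaries. The values below are illustrative placeholders only, not tuning advice:

```
# Illustrative placeholders for the knobs named above, not recommendations.
kube-controller-manager --kube-api-qps=50 --kube-api-burst=100
kube-apiserver --max-requests-inflight=800
```

Knowing which side of the control-plane/worker split a fix like this lives on is exactly the responsibility question at issue.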
-Enter Amazon Web Services (AWS) and [Amazon Elastic Container Service for Kubernetes (EKS)](https://aws.amazon.com/eks). As a -foundation, AWS offers us the tools needed to track our expenditures, team by team, organization by organization, without having -to completely implement our own model for managing expense. They provide us a clearly articulated Shared Responsibility Model that -offloads many layers of operational responsibility to their own scaled-out support teams across all regions and services. In -addition, they let us scale out our resources as-needed and with an operational expense model rather than a capital expenditure +Enter Amazon Web Services (AWS) and [Amazon Elastic Container Service for Kubernetes (EKS)](https://aws.amazon.com/eks). As a +foundation, AWS offers us the tools needed to track our expenditures, team by team, organization by organization, without having +to completely implement our own model for managing expense. They provide us a clearly articulated Shared Responsibility Model that +offloads many layers of operational responsibility to their own scaled-out support teams across all regions and services. In +addition, they let us scale out our resources as-needed and with an operational expense model rather than a capital expenditure model which simplifies our process and lessens our risk of spending money where we don't need to. -Kubernetes is a flexible, sophisticated tool for running workloads in a common way regardless of whether they are running in our -on-premise infrastructure or on AWS. This gives us a simplified operational model for our software deployment and runtime -management and monitoring and at the same time simplifies migration to AWS from our existing infrastructure. EKS in particular -offers an enormous benefit to GoDaddy. Our engineers use Kubernetes in incredibly diverse ways. In particular, our usage divides -up into four primary use cases: +Kubernetes is a flexible, sophisticated tool for running workloads in a common way regardless of whether they are running in our +on-premise infrastructure or on AWS. This gives us a simplified operational model for our software deployment and runtime +management and monitoring and at the same time simplifies migration to AWS from our existing infrastructure. EKS in particular +offers an enormous benefit to GoDaddy. Our engineers use Kubernetes in incredibly diverse ways. In particular, our usage divides +up into four primary use cases: -1. Batch: event- or schedule-driven, finite duration jobs. An example is a security scan, which usually contains standard +1. Batch: event- or schedule-driven, finite duration jobs. An example is a security scan, which usually contains standard container model definitions and minimal operational complexity -2. Small services: one-to-ten pod deployments with one container each, normally a traditional +2. Small services: one-to-ten pod deployments with one container each, normally a traditional [LAMP](https://en.wikipedia.org/wiki/LAMP_(software_bundle))-style web service -3. Big services: hundreds of pods with multiple containers, usually representing an entire microservices architecture tied back to +3. Big services: hundreds of pods with multiple containers, usually representing an entire microservices architecture tied back to a GoDaddy product line, such as a commerce system -4. Massive, end-user containerized services: thousands-to-hundreds of thousands of pods per cluster, each with thousands of nodes +4. 
Massive, end-user containerized services: thousands-to-hundreds of thousands of pods per cluster, each with thousands of nodes and multiple containers per pod in a sophisticated end-user architecture -As an example of the last, Managed WordPress 2.0 (MWP2) was developed on Kubernetes to produce managed, highly available WordPress +As an example of the last, Managed WordPress 2.0 (MWP2) was developed on Kubernetes to produce managed, highly available WordPress sites that offer our users fantastic performance, scalability and flexibility. We solved problems with traditional shared hosting -implementations of WordPress by using containers and taking advantage of the overlay filesystem, giving flexibility to site owners -to be on the versions of PHP and WordPress they want to be on. We used Kubernetes' horizontal scaling capabilities and made good +implementations of WordPress by using containers and taking advantage of the overlay filesystem, giving flexibility to site owners +to be on the versions of PHP and WordPress they want to be on. We used Kubernetes' horizontal scaling capabilities and made good use of caching to keep the performance of the WordPress system as high as possible. -But this is only one of many ways we are using this powerful container runtime and orchestration platform. We are also running -sophisticated CICD pipelines, automating security scans, deploying and managing both internally and externally facing proxies and -operating and scaling core customer services this way. Our Presence and Commerce systems, aftermarket DNS sales and core data +But this is only one of many ways we are using this powerful container runtime and orchestration platform. We are also running +sophisticated CICD pipelines, automating security scans, deploying and managing both internally and externally facing proxies and +operating and scaling core customer services this way. Our Presence and Commerce systems, aftermarket DNS sales and core data application infrastructure are tested, built, deployed and scaled using Kubernetes. -Our four primary use cases for container workloads demand many different configurations, management, and scaling requirements on -the Kubernetes clusters. EKS takes the operational complexity out of managing these clusters. First, it manages the scaling and -coordination of the control plane’s core infrastructure, eliminating the need for GoDaddy to administer Kubernetes master nodes. -Second, because of the deep integration of EKS with other AWS services, GoDaddy can leverage massive benefits from [Elastic Load +Our four primary use cases for container workloads demand many different configurations, management, and scaling requirements on +the Kubernetes clusters. EKS takes the operational complexity out of managing these clusters. First, it manages the scaling and +coordination of the control plane’s core infrastructure, eliminating the need for GoDaddy to administer Kubernetes master nodes. +Second, because of the deep integration of EKS with other AWS services, GoDaddy can leverage massive benefits from [Elastic Load Balancers](https://aws.amazon.com/elasticloadbalancing/), [auto-scaling](https://aws.amazon.com/autoscaling/), [AWS CloudTrail]( https://aws.amazon.com/cloudtrail/) logging, [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) monitoring, and event-driven programming using [AWS Lambda](https://aws.amazon.com/lambda/).
Third, we can manage the Kubernetes worker infrastructure using -GoDaddy’s best practices through [AWS CloudFormation](https://aws.amazon.com/cloudformation/). AWS CloudFormation enables GoDaddy -to define and deploy infrastructure as code, which is then used in conjunction with +https://aws.amazon.com/cloudtrail/) logging, [AWS Cloudwatch](https://aws.amazon.com/cloudwatch/) monitoring, and event-driven +programming using [AWS Lambda](https://aws.amazon.com/lambda/). Third, we can manage the Kubernetes worker infrastructure using +GoDaddy’s best practices through [AWS CloudFormation](https://aws.amazon.com/cloudformation/). AWS CloudFormation enables GoDaddy +to define and deploy infrastructure as code, which is then used in conjunction with [AWS Service Catalog](https://aws.amazon.com/servicecatalog/) to provide governance and best practices. -Getting started with EKS is simple. Before you begin, you should have kubectl and the Heptio Authenticator to allow IAM -authentication for your Kubernetes cluster installed. In addition, you should have the AWS CLI installed and set up with -credentials to access your account. Full instructions on setting up these prerequisites can be found in the [Getting Started +Getting started with EKS is simple. Before you begin, you should have kubectl and the Heptio Authenticator to allow IAM +authentication for your Kubernetes cluster installed. In addition, you should have the AWS CLI installed and set up with +credentials to access your account. Full instructions on setting up these prerequisites can be found in the [Getting Started guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html). There are two basic tasks involved in getting your AWS Kubernetes cluster up and running: -1. Get your cluster up: this is the control plane, including the Kubernetes masters and services, and +1. Get your cluster up: this is the control plane, including the Kubernetes masters and services, and 2. Get your worker nodes up: this includes the EC2 instances you'll be running your pods on To get your cluster going, you need nothing more than a VPC, -an [EKS role to associate with the cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html), subnets +an [EKS role to associate with the cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html), subnets selected from the VPC for your workers, and security groups for them to run under. Using the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/eks/index.html): ``` @@ -125,7 +126,7 @@ aws cloudformation create-stack \ This option will create a new VPC with new subnets and security groups and launch the cluster there. Note that the pricing of the EKS control plane is $0.20/hour, so if you're experimenting on a budget, be sure to keep track of how long your cluster is running. -The second set of things you'll need for your cluster are [the worker nodes](https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html). Here, the current recommendation is to use another canonical AWS-developed CloudFormation stack definition +The second set of things you'll need for your cluster are [the worker nodes](https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html). Here, the current recommendation is to use another canonical AWS-developed CloudFormation stack definition to get going quickly. ``` @@ -152,15 +153,15 @@ can find complete instructions in the [Getting Started guide](https://docs.aws.a That's all there is to it. 
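The hunk headers above elide most of the surrounding CLI text, so for orientation only: the cluster-creation step boils down to a single call like the sketch below. The cluster name, role ARN, subnet IDs, and security group ID are placeholders, not values from the original post:

```
aws eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,securityGroupIds=sg-cccc3333
```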
Within a few minutes, you can have a Kubernetes cluster integrated into your AWS account and ready for your workloads. -Our engineering teams embrace the DevOps model, where they own the process of developing, operating and monitoring the -infrastructure for their products in the same way that they develop their customer-facing applications. Using AWS Service Catalog -as a product portfolio manager, GoDaddy can define standard settings for products with AWS CloudFormation definitions, enabling -teams to iterate according to their own Service definitions within the boundaries that we determine to be both secure and -performant. This approach lets GoDaddy apply centralized governance and help teams across the company, while still maintaining a +Our engineering teams embrace the DevOps model, where they own the process of developing, operating and monitoring the +infrastructure for their products in the same way that they develop their customer-facing applications. Using AWS Service Catalog +as a product portfolio manager, GoDaddy can define standard settings for products with AWS CloudFormation definitions, enabling +teams to iterate according to their own Service definitions within the boundaries that we determine to be both secure and +performant. This approach lets GoDaddy apply centralized governance and help teams across the company, while still maintaining a path for radiating best practices and new knowledge out to the company. -Because GoDaddy hosts millions of domain names, websites and web services, we need an environment that scales to our needs while -maintaining operational efficiency and minimizing complexity. EKS offers us an industry standard container runtime and -orchestration engine that enables a clear path for migrating workloads from our on-premises infrastructure to AWS. It helps us -simplify how we engineer and lets us focus on our ability to offer differentiated and delightful experiences for our customers, -who look to GoDaddy to provide them the platform for creating and managing their independent ventures. +Because GoDaddy hosts millions of domain names, websites and web services, we need an environment that scales to our needs while +maintaining operational efficiency and minimizing complexity. EKS offers us an industry standard container runtime and +orchestration engine that enables a clear path for migrating workloads from our on-premises infrastructure to AWS. It helps us +simplify how we engineer and lets us focus on our ability to offer differentiated and delightful experiences for our customers, +who look to GoDaddy to provide them the platform for creating and managing their independent ventures. diff --git a/_posts/2018-07-10-react-native-wdio.md b/_posts/2018-07-10-react-native-wdio.md index 778e0f3..6150123 100644 --- a/_posts/2018-07-10-react-native-wdio.md +++ b/_posts/2018-07-10-react-native-wdio.md @@ -4,6 +4,7 @@ title: "React Native Application UI testing using WebDriverIO and Appium" date: 2018-07-09 12:00:00 -0800 cover: /assets/images/react-native-wdio/cover-wdio.png excerpt: We recently adopted WebDriverIO based UI testing for our React Native application. Benefits of using WebDriverIO include allowing us to write UI tests just as we wrote tests for the web. WebDriverIO configuration allows us to plugin Sauce Labs Emulators or Real Devices for cloud-based testing. 
+canonical: https://godaddy.com/resources/news/react-native-wdio authors: - name: Raja Panidepu url: https://www.linkedin.com/in/rpanidepu/ diff --git a/_posts/2018-08-15-real-user-performance-measuring-for-next-js.md b/_posts/2018-08-15-real-user-performance-measuring-for-next-js.md index fc35ff4..b156d08 100644 --- a/_posts/2018-08-15-real-user-performance-measuring-for-next-js.md +++ b/_posts/2018-08-15-real-user-performance-measuring-for-next-js.md @@ -6,6 +6,7 @@ cover: /assets/images/headers/next-rum.jpg excerpt: With the introduction of navigation timing in browsers it has become a lot easier to measure performance of your front-end application. With the introduction of the `next-rum` component you will be able to gather the same metrics for your Next.js based application as well. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/real-user-performance-measuring-for-next-js authors: - name: Arnout Kazemier url: https://github.com/3rd-Eden diff --git a/_posts/2018-08-24-higher-order-reducers.md b/_posts/2018-08-24-higher-order-reducers.md index 3ea4654..55f5c50 100644 --- a/_posts/2018-08-24-higher-order-reducers.md +++ b/_posts/2018-08-24-higher-order-reducers.md @@ -4,6 +4,7 @@ title: "Eliminating Boilerplate and Increasing Reusability with Higher-Order Red date: 2018-08-24 12:00:00 -0800 cover: /assets/images/redux-logo.png excerpt: My team has changed the way we write our redux reducers, choosing a more dynamic approach than the common switch statement. Creating reducers with higher-order factory functions gives us some great benefits. They can make the process of writing reducers faster and they're also flexible functions that can be used to generalize patterns and reduce repetition. +canonical: https://godaddy.com/resources/news/higher-order-reducers authors: - name: Bill Heberer url: https://github.com/bheberer @@ -14,7 +15,7 @@ authors: During my internship at GoDaddy, I've had the opportunity to work on the Account Homepage team, a Front-End centric team working on GoDaddy's new experience for logged in users. My team uses [Redux](https://redux.js.org/) to manage the state of this app. -Most complaints about Redux are related to boilerplate code and verboseness. These complaints are well-founded, as Redux was intended to make state changes obvious, not concise. In smaller apps, this kind of code isn't as much of a problem, but it becomes a significant time sink in large-scale apps. +Most complaints about Redux are related to boilerplate code and verboseness. These complaints are well-founded, as Redux was intended to make state changes obvious, not concise. In smaller apps, this kind of code isn't as much of a problem, but it becomes a significant time sink in large-scale apps. Reducer functions were a pain point for my team. We used switch statements to write our reducers, which amounts to a lot of boilerplate. This boilerplate added up as our project progressed and we continued to add actions, so we ended up with some pretty large functions. These functions were cumbersome and often repetitive, so we decided to forego this static way of writing reducers for a more dynamic approach using higher-order reducers. 
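For contrast with what follows, the switch-statement style being replaced tends to look like the sketch below. This is a generic illustration, not the team's actual code; the first action name is assumed, while the other two mirror the fetching example used later in the post:

```js
// A typical switch-based reducer: every case repeats the same merge logic.
const fetchReducer = (state = {}, action) => {
  switch (action.type) {
    case 'FETCHING_DATA': // assumed name; the diff only shows the latter two
      return { ...state, ...action.payload };
    case 'DATA_FETCHED':
      return { ...state, ...action.payload };
    case 'FETCH_ERROR':
      return { ...state, ...action.payload };
    default:
      return state;
  }
};
```

Three cases, one behavior: exactly the repetition a higher-order factory can absorb.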
@@ -96,7 +97,7 @@ const reducer = createReducer({}, { ...action.payload }), ['DATA_FETCHED']: (state, action) => ({ - ...state, + ...state, ...action.payload }), ['FETCH_ERROR']: (state, action) => ({ @@ -120,9 +121,9 @@ const createReducer = (initialState, defaultHandler, actionTypes) => Now ```createReducer``` takes in the initial state, an array full of potential action types and a default handler function, which will be called by each action type in the reducer. We use an array here for concision and because it's actually faster to use the array includes method than using a lookup table for smaller sample sizes. Using a ```defaultHandler``` makes adding an action type to a reducer incredibly fast: all you have to do is add the new action type into your ```actionTypes``` parameter. The time needed to create the reducer has gone down, and the repetition has been eliminated as well. Here's what our reducer looks like now. ```js -const updateState = (state, action) => ({ - ...state, - ...action.payload +const updateState = (state, action) => ({ + ...state, + ...action.payload }) const reducer = createReducer({}, updateState, [ @@ -152,14 +153,14 @@ This version of ```createReducer``` takes in an extra parameter. This customHand Let's recreate our fetching reducer using this new function. Instead of having all the actions follow the same pattern, we're going to have the ```DATA_FETCHED``` action add the data we've fetched to the end of an array. ```js -const updateState = (state, action) => ({ - ...state, - ...action.payload +const updateState = (state, action) => ({ + ...state, + ...action.payload }) -const addToArray = (state, action) => ({ ...state, - data: [...state.data, ...action.payload] +const addToArray = (state, action) => ({ + ...state, + data: [...state.data, ...action.payload] }) const reducer = createReducer({}, updateState, [ @@ -175,7 +176,7 @@ So now we have a reducer that handles the first two actions with our default han
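The patch never shows the body of ```createReducer```, only its signature in a hunk header. Given the surrounding description (an array of action types checked with the array includes method, plus one default handler), the implementation is presumably close to this sketch, reconstructed from the prose rather than copied from the post:

```js
// Reconstructed sketch: run the default handler for known action types,
// otherwise return state unchanged.
const createReducer = (initialState, defaultHandler, actionTypes) =>
  (state = initialState, action) =>
    actionTypes.includes(action.type)
      ? defaultHandler(state, action)
      : state;
```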
diff --git a/_posts/2018-09-04-python-mocking.md b/_posts/2018-09-04-python-mocking.md index 55badc0..d92ff5c 100644 --- a/_posts/2018-09-04-python-mocking.md +++ b/_posts/2018-09-04-python-mocking.md @@ -4,6 +4,7 @@ title: "Making mocking mistakes in Python" date: 2018-09-04 12:00:00 -0800 cover: /assets/images/python-kitten.jpg excerpt: Python mocking is tricky. See if you can diagnose and correct four example mocking mistakes, all of which I've made while learning the mock library in the past few months. +canonical: https://godaddy.com/resources/news/python-mocking authors: - name: Raphey Holmes url: https://github.com/raphey @@ -111,7 +112,7 @@ class TestIsCatPerson(unittest.TestCase): 'meyers_briggs_type': 'INTJ', 'likes_laser_pointers': True, 'dresses_like_a_cat': True, - 'validation_id': 'h19d8w22' + 'validation_id': 'h19d8w22' } self.assertTrue(is_cat_person('path/to/person')) ``` diff --git a/_posts/2018-09-11-redis-ruby-bloom-filter.md b/_posts/2018-09-11-redis-ruby-bloom-filter.md index 2d4b787..dcbc314 100644 --- a/_posts/2018-09-11-redis-ruby-bloom-filter.md +++ b/_posts/2018-09-11-redis-ruby-bloom-filter.md @@ -6,6 +6,7 @@ cover: /assets/images/bloom_filter.png excerpt: In our email marketing products, we changed our bloom filter implementation by using a custom Redis and an in-memory bloom filter written in Ruby. We will go through iterations at solving a real problem and writing a custom bloom filter from scratch. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/redis-ruby-bloom-filter authors: - name: Dalibor Nasevic title: Sr. Principal Software Engineer diff --git a/_posts/2018-10-02-mocking-soap-apis-in-functional-tests-using-nock.md b/_posts/2018-10-02-mocking-soap-apis-in-functional-tests-using-nock.md index f195ce5..e926639 100644 --- a/_posts/2018-10-02-mocking-soap-apis-in-functional-tests-using-nock.md +++ b/_posts/2018-10-02-mocking-soap-apis-in-functional-tests-using-nock.md @@ -7,6 +7,7 @@ excerpt: This post explains why and how to mock the external REST and SOAP APIs in the functional tests of a service written in NodeJS to have a more robust CICD. In this post, We will write a simple functional tests and mock the external API using `nock` node module. +canonical: https://godaddy.com/resources/news/mocking-soap-apis-in-functional-tests-using-nock authors: - name: Deepti Agrawal url: https://www.linkedin.com/in/adeepti10/ @@ -59,7 +60,7 @@ npm i --save-dev nock Assume the service under development has an endpoint `/user` which returns `fullname` derived from the response of the dependent service, the test snippet -would be: +would be: ```js const nock = require('nock'); diff --git a/_posts/2018-10-15-distributed-cron-for-rails-apps-with-sidekiq-scheduler.md b/_posts/2018-10-15-distributed-cron-for-rails-apps-with-sidekiq-scheduler.md index de60c7c..bf6f6d3 100644 --- a/_posts/2018-10-15-distributed-cron-for-rails-apps-with-sidekiq-scheduler.md +++ b/_posts/2018-10-15-distributed-cron-for-rails-apps-with-sidekiq-scheduler.md @@ -6,6 +6,7 @@ cover: /assets/images/sidekiq_scheduler.png excerpt: In some of our Ruby on Rails applications, we have migrated from using OS based cron to distributed cron using Sidekiq Scheduler. We will discuss the motivation for this change and the benefits from it. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/distributed-cron-for-rails-apps-with-sidekiq-scheduler authors: - name: Dalibor Nasevic title: Sr. 
Principal Software Engineer diff --git a/_posts/2018-10-16-cypress-vs-selenium.md b/_posts/2018-10-16-cypress-vs-selenium.md index d33506a..68b94e5 100644 --- a/_posts/2018-10-16-cypress-vs-selenium.md +++ b/_posts/2018-10-16-cypress-vs-selenium.md @@ -4,6 +4,7 @@ title: "Move over Selenium - UI Testing with Cypress" date: 2018-11-06 02:00:00 -0800 cover: /assets/images/cypress/cypress-at-godaddy.png excerpt: Cypress is a relatively new front-end testing tool that can be used for your UI testing needs. Selenium brings some challenges to UI testing that Cypress aims to solve through a better developer experience. +canonical: https://godaddy.com/resources/news/cypress-vs-selenium authors: - name: Pablo Velasquez url: https://github.com/newpablo @@ -36,43 +37,43 @@ We opted to write extensions for what we needed and build out our own framework ## Challenges with Selenium, Getting Everyone Writing UI Tests -> The overarching goal is to have the entire team write UI Tests. +> The overarching goal is to have the entire team write UI Tests. -Typically you'll have an SDET (engineer specializing in quality) along with developers building features. Our goal was to have all developers write UI tests, not just the SDET. +Typically you'll have an SDET (engineer specializing in quality) along with developers building features. Our goal was to have all developers write UI tests, not just the SDET. -We focused on getting more developers to maintain and write tests. They would do it if forced but it wasn't something they found value in. We spent most of our time figuring out sleeps/waits and even when something was easy it took some effort to find why the test broke. +We focused on getting more developers to maintain and write tests. They would do it if forced but it wasn't something they found value in. We spent most of our time figuring out sleeps/waits and even when something was easy it took some effort to find why the test broke. -We struggled to get wide adoption in teams for writing UI tests. Therefore scaling with more developers was not going to work. +We struggled to get wide adoption in teams for writing UI tests. Therefore scaling with more developers was not going to work. -Even with all of the extensions, improvements and wrappers we built into our custom framework, developer interest remains low and maintenance remains high. +Even with all of the extensions, improvements and wrappers we built into our custom framework, developer interest remains low and maintenance remains high. ![Screenshot showing Selenium Test After Run]({{site.baseurl}}/assets/images/cypress/selenium-test.png) -Some of the problems we encountered: +Some of the problems we encountered: * Setup * We found narrowing down to the right choice took a great deal of effort due to the plethora of technologies to evaluate. - * There are dozens of JS frameworks and combinations of runners. Contrast that with Cypress, which gets you started immediately. - * It takes considerable effort to research and get started since you want to get a good fit for your team. - * We built tooling around launching UI tests but it proved difficult to run locally. + * There are dozens of JS frameworks and combinations of runners. Contrast that with Cypress, which gets you started immediately. + * It takes considerable effort to research and get started since you want to get a good fit for your team. + * We built tooling around launching UI tests but it proved difficult to run locally.
* Learning Curve * We found it took considerable time to become proficient writing and debugging tests. * You launch your test and watch it run while looking at console output. It takes time to train yourself to understand the output at the speed the test runs. * Since you can't visually look at every step slowly side-by-side with console output it requires you to re-run multiple times to catch why it broke. * Finding and clicking on elements * The classic problem facing any UI test is the dreaded sleeps/timeouts (flakiness). * Sometimes flakiness in tests comes from those waits for transitions or waiting for the DOM to load an element. You attempt to send a click to specific coordinates, however, since you are sending a command through a driver to the browser, it could be something changed in the browser, e.g., the element moved location. - * The architecture is based on a command -> driver -> browser - * This architecture can result in some issues like the find element described above. + * The architecture is based on a command -> driver -> browser + * This architecture can result in some issues like the find element described above. * As noted, you write your test and tell the driver what you want. The driver tells the browser what you want. Then the loop completes and you get a response. The commands don't run in the browser so they don't have access to any browser information to help your test respond if anything changes. - * Please note you can exec JS with Selenium, but the comparison here is between the overall architectures. + * Please note you can exec JS with Selenium, but the comparison here is between the overall architectures. ## Solutions with Cypress Our team piloted a project using Cypress to see if we could overcome some of the challenges mentioned above. The goal - of this post isn't to convince you not to use Selenium but to describe some of the things we found useful with - Cypress that may help overcome some objections you might receive in trying to scale and strengthen your UI testing. + of this post isn't to convince you not to use Selenium but to describe some of the things we found useful with + Cypress that may help overcome some objections you might receive in trying to scale and strengthen your UI testing. Cypress provides detailed [guides](https://docs.cypress.io/guides/overview/why-cypress.html) to get started but we'll highlight a few steps below to help summarize. @@ -85,8 +86,8 @@ Cypress can be easily installed with npm. Create a directory for your Cypress so ``` $ npm install cypress --save-dev ``` -Everything you need to start writing tests with Cypress will be installed for you within seconds. Once the -installation has finished, open Cypress (note: you will use npx since the Cypress node module has been installed +Everything you need to start writing tests with Cypress will be installed for you within seconds. Once the +installation has finished, open Cypress (note: you will use npx since the Cypress node module has been installed within the current directory).
```console @@ -98,20 +99,20 @@ The Cypress Test Runner will load with a pre-loaded set of tests which run again ![Screenshot showing Cypress Test Runner Start]({{site.baseurl}}/assets/images/cypress/ide.png) ### Clear Documentation - -In addition to the detailed guides provided on the Cypress website, Cypress provides a search capability on their + +In addition to the detailed guides provided on the Cypress website, Cypress provides a search capability on their documentation site helping to find answers quicker. ![Screenshot showing Cypress Search]({{site.baseurl}}/assets/images/cypress/search.png) Similar to Selenium, Cypress is also [open source](https://github.com/cypress-io/cypress) which has allowed us to look - at their code to find how it works and provided insight into issues others have run into providing potential - workarounds until the issue can be resolved properly. What sets it apart from Selenium is that all of the source + at their code to find how it works and provided insight into issues others have run into providing potential + workarounds until the issue can be resolved properly. What sets it apart from Selenium is that all of the source code you need is in one place. There are no other drivers or tools in other repos you may need to go hunt for. -A dedicated Cypress room on [Gitter](https://gitter.im/cypress-io/cypress) has proved valuable to find information as -well. Cypress team members actively respond to questions there and the search functionality provides history of past -questions and answers. There are several Selenium resources on Gitter as well but the abundance of rooms can make it +A dedicated Cypress room on [Gitter](https://gitter.im/cypress-io/cypress) has proved valuable to find information as +well. Cypress team members actively respond to questions there and the search functionality provides history of past +questions and answers. There are several Selenium resources on Gitter as well but the abundance of rooms can make it noisier to find the right answers. ### Simple methods @@ -121,73 +122,73 @@ Consider the following code (taken from [cypress.io](https://docs.cypress.io/gui ```javascript describe('Post Resource', function() { it('Creating a New Post', function() { - cy.visit('/posts/new') + cy.visit('/posts/new') - cy.get('input.post-title') - .type('My First Post') + cy.get('input.post-title') + .type('My First Post') - cy.get('input.post-body') - .type('Hello, world!') + cy.get('input.post-body') + .type('Hello, world!') - cy.contains('Submit') - .click() + cy.contains('Submit') + .click() - cy.url() + cy.url() .should('include', '/posts/my-first-post') - cy.get('h1') + cy.get('h1') .should('contain', 'My First Post') }) }) ``` -Notice how easy and simple this code is to understand?!?! The time it takes for someone to become familiar with how +Notice how easy and simple this code is to understand?!?! The time it takes for someone to become familiar with how to write Cypress tests is minimal. The learning curve is drastically reduced by: * Simple commands like `.visit()`, `.get()` and `.click()` * No additional overhead to determine if a selector is a `id` or `class` since Cypress uses jQuery to get elements * Test framework out of the box - no need to include additional testing packages * Chaining of commands allowing each command to yield a subject to the next command similar to Promises - although not an exact 1:1 implementation. 
Commands cannot be run in parallel, cannot be forgot to be returned and cannot use a ` - .catch()` error handler for a failed command. This ensures tests are deterministic, repeatable and consistent for a + .catch()` error handler for a failed command. This ensures tests are deterministic, repeatable and consistent for a flake free user experience. ### Finding Elements and Debugging Tests -One of the more impressive features of Cypress is the Test Runner. Inside the Test Runner, Cypress offers a Selector +One of the more impressive features of Cypress is the Test Runner. Inside the Test Runner, Cypress offers a Selector Playground that can be used to generate selectors for your tests. -![Screenshot showing Element Selector]({{site.baseurl}}/assets/images/cypress/elementselector.png) +![Screenshot showing Element Selector]({{site.baseurl}}/assets/images/cypress/elementselector.png) -Gone are the days of inspecting elements or hunting through page source to generate a selector. Cypress defines a -strategy of finding the best unique selector and provides the command needed within your test code. In the above -example, Cypress has determined the best selector for the 'Add to this page' button is `.pivot-list > .btn`. The -strategy for selecting elements is customizable. The Selector Playground will also let you free-form type selectors -and show you how many elements match that selector so you can have confidence knowing you've created a unique +Gone are the days of inspecting elements or hunting through page source to generate a selector. Cypress defines a +strategy of finding the best unique selector and provides the command needed within your test code. In the above +example, Cypress has determined the best selector for the 'Add to this page' button is `.pivot-list > .btn`. The +strategy for selecting elements is customizable. The Selector Playground will also let you free-form type selectors +and show you how many elements match that selector so you can have confidence knowing you've created a unique selector for your element. Another feature of the Test Runner is the Command Log which details every step of the test. ![Screenshot showing Cypress Test Runner Running]({{site.baseurl}}/assets/images/cypress/testrunner.png) -On the left side a list of commands will show exactly what request was made making it easy to debug when problems -arise. On the GoDaddy GoCentral team, we use a testing environment to verify new features before deploying to our -production environment where customers interact with our site. The testing environment has many dependencies on -services maintained by teams throughout the company and sometimes one of those services becomes unavailable. In the -example below you can see a call to one of our APIs that is returning a 404 response. This allows us to debug our +On the left side a list of commands will show exactly what request was made making it easy to debug when problems +arise. On the GoDaddy GoCentral team, we use a testing environment to verify new features before deploying to our +production environment where customers interact with our site. The testing environment has many dependencies on +services maintained by teams throughout the company and sometimes one of those services becomes unavailable. In the +example below you can see a call to one of our APIs that is returning a 404 response. This allows us to debug our test and inspect the request and response made to determine if our test is working properly. 
![Screenshot showing Cypress debugging]({{site.baseurl}}/assets/images/cypress/debugging.png)

### Mocking Flaky APIs

-As mentioned in the previous section, flaky or slow APIs can drag down the efficiency of UI testing. When a service
+As mentioned in the previous section, flaky or slow APIs can drag down the efficiency of UI testing. When a service
doesn't return as expected, it's hard to verify UI functionality. Cypress introduces mocking within your test code to
- account for this scenario allowing you to have more resilient UI tests.
-
-One instance where we use this on the GoDaddy GoCentral team is when calls are made to our billing API. We have a
+ account for this scenario, allowing you to build more resilient UI tests.
+
+One instance where we use this on the GoDaddy GoCentral team is when calls are made to our billing API. We have a
potential race condition when making calls to our billing API due to the fast nature of Cypress tests.

-To avoid this race condition, we can simulate the call to the billing API using the `.route()` method Cypress provides
+To avoid this race condition, we can simulate the call to the billing API using the `.route()` method Cypress provides
as shown below.

```javascript
@@ -200,19 +201,19 @@ cy.route({
    enabled: true
  }
});
-```
+```

In the above code, we capture any requests that match the URL provided and return a 204 response with a response body.
-This helps avoid any issue that may occur with the service being called and potentially speeds up the test by
+This helps avoid any issue that may occur with the service being called and potentially speeds up the test by
avoiding making the actual call to the service. We can also guarantee that our test should never fail because of this
- race condition. We also simulated the JSON response received from the endpoint. This can be useful when wanting to
- test various responses without having to setup test data before each test.
-
+ race condition. We also simulated the JSON response received from the endpoint. This can be useful when wanting to
+ test various responses without having to set up test data before each test.
+
![Screenshot showing 204 mocking]({{site.baseurl}}/assets/images/cypress/204mocking.png)
-
+
Another useful example of mocking responses is to verify UI functionality when things go bad. With Cypress, it's easy
to simulate what an error might look like to a customer when a service outage occurs.
-
+
```javascript
cy.server();
cy.route({
@@ -223,31 +224,31 @@ cy.route({
});
```

-With the above code, we can simulate our endpoint returning a 500 response to verify the customer sees the
+With the above code, we can simulate our endpoint returning a 500 response to verify that the customer sees the
appropriate error message on their screen.

![Screenshot showing 500 mocking]({{site.baseurl}}/assets/images/cypress/500mocking.png)

## Best Practices (or what we've learned so far)

-* There's a knee jerk reaction to blame the test framework (in this case Cypress) for your test failures. 99.9% of
-the time, the issue isn't with Cypress - it's with your code or the test environment being used. Double check you're
+* There's a knee-jerk reaction to blame the test framework (in this case Cypress) for your test failures. 99.9% of
+the time, the issue isn't with Cypress - it's with your code or the test environment being used. Double-check that you're
approaching your test case the best way.
-* Set baseUrl in cypress.json - There are lots of useful things you can configure in your cypress.json file but the
+* Set baseUrl in cypress.json - There are lots of useful things you can configure in your cypress.json file, but the
most important is to use a baseUrl. Without it, Cypress does not know the URL of the app you plan to test. This opens
- a browser on localhost with a random port. When you finally use `cy.visit()` it will look like your tests are
+ a browser on localhost with a random port. When you finally use `cy.visit()`, it will look like your tests are
reloading. It will also rerun any commands issued (in our case, shopper setup) all over again. Use baseUrl to avoid
this.
* Use separate spec files for your tests. This is especially useful when running tests in parallel or trying to retry
tests.
-* As of the time of this writing, Cypress does not have a retry capability. The functionality appears to be in
-[development](https://github.com/cypress-io/cypress/issues/1313) and may be released soon. In the mean time, use a
-retry [script](https://gist.github.com/Bkucera/4ffd05f67034176a00518df251e19f58#file-cypress-retries-js-L14)
+* As of the time of this writing, Cypress does not have a retry capability. The functionality appears to be in
+[development](https://github.com/cypress-io/cypress/issues/1313) and may be released soon. In the meantime, use a
+retry [script](https://gist.github.com/Bkucera/4ffd05f67034176a00518df251e19f58#file-cypress-retries-js-L14)
developed by another Cypress user. It's fantastic and supports running tests in parallel as well.
* If you're trying to mock a large response object, Cypress doesn't handle this well. It's a known
[issue](https://github.com/cypress-io/cypress/issues/76) and a lot of clever Cypress users have found workarounds.
-* Lots of things are configurable in Cypress. If you don't like the default behavior you can most likely find a way
-to change it through the documentation. Things like network requests getting whitelisted, element selection strategy
-and default timeouts are all examples of things that can be changed. Side note on timeouts - Cypress does a good job
+* Lots of things are configurable in Cypress. If you don't like the default behavior, you can most likely find a way
+to change it through the documentation. Things like network requests getting whitelisted, element selection strategy,
+and default timeouts are all examples of things that can be changed. Side note on timeouts - Cypress does a good job
of waiting for things to happen - modify timeouts sparingly to make use of the speed improvements Cypress provides.

## We sent out a survey to developers and here are some quotes from them:
diff --git a/_posts/2018-10-28-google-lighthouse-as-a-service-lighthouse4u.md b/_posts/2018-10-28-google-lighthouse-as-a-service-lighthouse4u.md
index 0d4c6d4..fdefaea 100644
--- a/_posts/2018-10-28-google-lighthouse-as-a-service-lighthouse4u.md
+++ b/_posts/2018-10-28-google-lighthouse-as-a-service-lighthouse4u.md
@@ -6,6 +6,7 @@ cover: /assets/images/lh4u/cover.jpg
excerpt: Lighthouse4u is an opensource API for running Google Lighthouse tests at any scale, backed by Elasticsearch and Kibana for your search and visualization needs.
+canonical: https://godaddy.com/resources/news/google-lighthouse-as-a-service-lighthouse4u authors: - name: Aaron Silvas url: https://www.linkedin.com/in/aaron-silvas-5817626/ @@ -122,7 +123,7 @@ Both requirements are opensource and easy to setup if you're not already using t npm i -g lighthouse4u lh4u --config-dir ./app/config \ --config-base defaults \ - --config local \ + --config local \ -- init ``` @@ -147,7 +148,7 @@ are available, but here is a summary of the more interesting bits: your own `custom` provider via `customPath`. We use this for JWT internally. * `http.routes` - Allows you to extend your LH4U instance with your own custom routes. This can be handy if you need to extend the behavior of your server. -* `lighthouse.config` - All LH settings can be overridden to fit your needs. +* `lighthouse.config` - All LH settings can be overridden to fit your needs. * `lighthouse.validate` - A handy feature in cases where you need to verify that the responding page is who and what you think before you record the LH results of an incorrect page. Useful in cases where there may be DNS transitions. Plug in @@ -161,7 +162,7 @@ are available, but here is a summary of the more interesting bits: -## Dynamic Pipelines +## Dynamic Pipelines We've got a ton of useful data, but what can we do with it automagically? In the case of a CICD pipeline, instead of surfacing the results, nothing prevents you from diff --git a/_posts/2018-11-05-reduxful-manage-restful-data-with-redux.md b/_posts/2018-11-05-reduxful-manage-restful-data-with-redux.md index e6b50d0..16973d3 100644 --- a/_posts/2018-11-05-reduxful-manage-restful-data-with-redux.md +++ b/_posts/2018-11-05-reduxful-manage-restful-data-with-redux.md @@ -6,6 +6,7 @@ cover: /assets/images/reduxful/cover.png excerpt: Introducing Reduxful, an open source project which aims to reduce the boilerplate for managing RESTful data with Redux by generating actions, reducers, and selectors for you. +canonical: https://godaddy.com/resources/news/reduxful-manage-restful-data-with-redux authors: - name: Andrew Gerard url: https://www.linkedin.com/in/andrewgerard/ @@ -16,7 +17,7 @@ As you may know, a web app's client-side state is often related to data requested from RESTful services. There are several approaches to managing this relationship, much of it depending on the technology stack you are working with. At GoDaddy, we have standardized on building web apps with React and using Redux -for state management. We have recently open sourced a project to help manage +for state management. We have recently open sourced a project to help manage RESTful data with Redux which we are now introducing, titled **Reduxful**. Utilizing Redux to keep track of your requested data has many benefits. @@ -173,7 +174,7 @@ logic put in place. While it may not appear to be a _ton_ of code above, remember that this is for requests to only two endpoints. This code will scale linearly as more endpoints are added to the app. -The complexity grows when you start to add additional features. Say you need to +The complexity grows when you start to add additional features. Say you need to start tracking additional details of a request such as duration or start and end times. Also, note that we have no error handling above! This is an additional implementation detail that will also scale linearly with each @@ -184,7 +185,7 @@ endpoint you need to add. Now that we have our Redux tools in place, let us see how we would use them in a simple React app. 
Our app will have a top-level component to select doodads from our list response, and a detail component to show our item response based -on the selection. +on the selection. ```jsx // ViewDoodadDetails.js @@ -358,7 +359,7 @@ To mitigate this, let us now take a look at the **Reduxful** project. ### Origins -This project was born out of the development for the new hosting products +This project was born out of the development for the new hosting products web app. This new web app has the user experience goal of being a gateway for users to manage all their hosting products in a single space. The developer experience goal is to get product developers off of technology islands and to @@ -408,7 +409,7 @@ export default new Reduxful('doodadApi', apiDesc, { requestAdapter }); As you can see, setting up and interacting with a RESTful endpoint via Redux is simple and straightforward with Reduxful. No boilerplate required! Also note, you don't _have_ to use fetch. If there is another request library you prefer, -as long as you make an adapter for it, it can be used with Reduxful. +as long as you make an adapter for it, it can be used with Reduxful. With this Reduxful setup, we can delete our first example setup files. Now let us see what needs to be updated in our React code to use our new Redux tools @@ -421,7 +422,7 @@ import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; + import { isLoaded } from 'reduxful'; -+ import { resourceShape } from 'reduxful/react-addons' ++ import { resourceShape } from 'reduxful/react-addons' - import { selectDoodad } from './selectors'; - import * as actionCreators from './actionCreators'; @@ -492,7 +493,7 @@ import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; + import { isLoaded } from 'reduxful'; -+ import { resourceShape } from 'reduxful/react-addons' ++ import { resourceShape } from 'reduxful/react-addons' import ViewDoodadDetails from './ViewDoodadDetails'; - import { selectDoodadList } from './selectors'; @@ -609,9 +610,9 @@ different query or path parameters. As such, resources are keyed in state by an endpoint's name and the params passed to it. This allows tracking of multiple requests uniquely. -Besides the generation of Redux tooling around your APIs, Reduxful also handles -throttling of repeated requests and debouncing in-flight requests, -along with several other features for working with RESTful data in Redux. +Besides the generation of Redux tooling around your APIs, Reduxful also handles +throttling of repeated requests and debouncing in-flight requests, +along with several other features for working with RESTful data in Redux. ### From here diff --git a/_posts/2018-11-13-engaging-standups.md b/_posts/2018-11-13-engaging-standups.md index e719940..e6dbb9b 100644 --- a/_posts/2018-11-13-engaging-standups.md +++ b/_posts/2018-11-13-engaging-standups.md @@ -7,6 +7,7 @@ excerpt: When team members become disengaged in a daily scrum meeting, it can be question the value of the ceremony. By committing to decrease siloing and help each other with blockers, a team can achieve a more healthy culture and become more engaged in each others' progress. 
+canonical: https://godaddy.com/resources/news/engaging-standups authors: - name: Conor Fellin url: https://github.com/cfellin1 @@ -33,7 +34,7 @@ Below are a few techniques that we found helped improve our communication as a t If people are not interested in others' status updates, it may be a sign that the team is breaking into silos. Some specialization is fine, but in the case of a production incident (or someone winning the lottery and quitting) you need the whole team to have at least a shallow knowledge of all of the team's projects. -The risk of siloing increases for distributed teams, where the convenience of hallway conversations can encourage people to form informal sub-teams that share knowledge within themselves rather than with the team as a whole. +The risk of siloing increases for distributed teams, where the convenience of hallway conversations can encourage people to form informal sub-teams that share knowledge within themselves rather than with the team as a whole. There are plenty of ways to break team members out of their silos. To some extent, it should happen naturally if you are following the scrum adage of focusing on one goal at a time. It's unlikely that a contributor will be able to stick with their pet codebase if the team is collectively moving from one goal to another in order of priority. diff --git a/_posts/2018-11-20-developer-view-oozie-vs-airflow.md b/_posts/2018-11-20-developer-view-oozie-vs-airflow.md index df05577..1dfca43 100644 --- a/_posts/2018-11-20-developer-view-oozie-vs-airflow.md +++ b/_posts/2018-11-20-developer-view-oozie-vs-airflow.md @@ -3,16 +3,17 @@ layout: post title: "Data pipeline job scheduling in GoDaddy: Developer’s point of view on Oozie vs Airflow" date: 2018-11-15 12:00:00 -0800 cover: /assets/images/time.png -excerpt: This blog discusses the pros and cons of Oozie and Airflow to help you choose which scheduler to use for your data pipeline jobs. +excerpt: This blog discusses the pros and cons of Oozie and Airflow to help you choose which scheduler to use for your data pipeline jobs. It also contains a sample plugin which implements the Airflow operator. +canonical: https://godaddy.com/resources/news/developer-view-oozie-vs-airflow authors: - name: Anusha Buchireddygari url: https://www.linkedin.com/in/anushabuchireddygari/ photo: /assets/images/anusha-buchireddygari.png --- -On the Data Platform team at GoDaddy we use both Oozie and Airflow for scheduling jobs. -In the past we've found each tool to be useful for managing data pipelines but are migrating all of our jobs to Airflow because of the reasons discussed below. +On the Data Platform team at GoDaddy we use both Oozie and Airflow for scheduling jobs. +In the past we've found each tool to be useful for managing data pipelines but are migrating all of our jobs to Airflow because of the reasons discussed below. In this article, I'll give an overview of the pros and cons of using Oozie and Airflow to manage your data pipeline jobs. To help you get started with pipeline scheduling tools I've included some sample plugin code to show how simple it is to modify or add functionality in Airflow. @@ -32,7 +33,7 @@ With cron, you have to write code for the above functionality, whereas Oozie and ### Oozie ### -[Apache Oozie](https://github.com/apache/Oozie) is a workflow scheduler which uses Directed Acyclic Graphs (DAG) to schedule Map Reduce Jobs (e.g. Pig, Hive, Sqoop, Distcp, Java functions). 
+[Apache Oozie](https://github.com/apache/Oozie) is a workflow scheduler which uses Directed Acyclic Graphs (DAG) to schedule Map Reduce Jobs (e.g. Pig, Hive, Sqoop, Distcp, Java functions).
It's an open source project written in Java. When we develop Oozie jobs, we write bundle, coordinator, workflow, and properties files. A workflow file is required whereas the others are optional.

* The workflow file contains the actions needed to complete the job. Some of the common actions we use in our team are the Hive action to run hive scripts, ssh action, shell action, pig action and fs action for creating, moving, and removing files/folders
@@ -50,7 +51,7 @@ At GoDaddy, we use Hue UI for monitoring Oozie jobs.
* SLA checks can be added

* Cons:
- * Less flexibility with actions and dependency, for example: Dependency check for partitions should be in MM, dd, YY format, if you have integer partitions in M or d, it’ll not work.
+ * Less flexibility with actions and dependencies. For example, dependency checks for partitions should be in MM, dd, YY format; if you have integer partitions in M or d, they will not work.
 * Actions are limited to allowed actions in Oozie like fs action, pig action, hive action, ssh action and shell action.
 * All the code should be on HDFS for map reduce jobs.
 * Limited amount of data (2KB) can be passed from one action to another.
@@ -68,31 +69,31 @@ Some of the features in Airflow are:
At GoDaddy, the Customer Knowledge Platform team is working on creating a Docker image for Airflow, so other teams can develop and maintain their own Airflow scheduler.

* Pros:
- * The Airflow UI is much better than Hue (Oozie UI),for example: Airflow UI has a Tree view to track task failures unlike Hue, which tracks only job failure.
+ * The Airflow UI is much better than Hue (Oozie UI). For example, the Airflow UI has a Tree view to track task failures, unlike Hue, which tracks only job failures.
 * The Airflow UI also lets you view your workflow code, which the Hue UI does not.
 * More flexibility in the code, you can write your own operator plugins and import them in the job.
 * Allows dynamic pipeline generation which means you could write code that instantiates a pipeline dynamically.
- * Contains both event-based trigger and time-based trigger.
- Event based trigger is so easy to add in Airflow unlike Oozie.
- Event based trigger is particularly useful with data quality checks.
- Suppose you have a job to insert records into database but you want to verify whether an insert operation is successful so you would write a query to check record count is not zero.
+ * Contains both event-based and time-based triggers.
+ Event-based triggers are easy to add in Airflow, unlike Oozie,
+ and are particularly useful with data quality checks.
+ Suppose you have a job that inserts records into a database and you want to verify the insert succeeded, so you write a query to check that the record count is not zero.
 In Airflow, you could add a data quality operator to run after the insert is complete, whereas in Oozie, since it's time-based, you could only specify a time to trigger the data quality job.
 * Lots of functionalities like retry, SLA checks, Slack notifications, all the functionalities in Oozie and more.
 * Disable jobs easily with an on/off button in the WebUI, whereas in Oozie you have to remember the jobid to pause or kill the job.
-
+

* Cons:
 * In 2018, Airflow is still an Apache incubator project. There is a large community working on the code.
* You have to manually delete the filename from the meta information if you change the filename.
- * You need to learn python programming language for scheduling jobs.
+ * You need to learn the Python programming language to schedule jobs.
 Business analysts who don't have coding experience might find it hard to pick up writing Airflow jobs, but once you get the hang of it, it becomes easy.
- * When concurrency of the jobs increases, no new jobs will be scheduled.
- Sometimes even though job is running, tasks are not running , this is due to number of jobs running at a time can affect new jobs scheduled.
+ * When the concurrency of jobs increases, no new jobs will be scheduled.
+ Sometimes, even though a job is running, its tasks are not, because the number of jobs running at a time can affect whether new tasks get scheduled.
 This also causes confusion with the Airflow UI because although your job is in the run state, its tasks are not.

### What works for your Organization? (Oozie or Airflow)

-Airflow has so many advantages and there are many companies moving to Airflow.
+Airflow has many advantages, and many companies are moving to it.
There is an active community working on enhancements and bug fixes for Airflow.
A few things to remember when moving to Airflow:
* You have to take care of scalability using Celery/Mesos/Dask.
@@ -125,7 +126,7 @@ class FileSensorOperator(BaseSensorOperator):
        super(FileSensorOperator, self).__init__(*args, **kwargs)
        self.file_path = file_path
        self.file_pattern = file_pattern
-
+
    # poke is a standard method used in built-in sensor operators
    def poke(self, context):
        file_location = self.file_path
@@ -147,12 +148,12 @@ class FilePlugin(AirflowPlugin):

###### Airflow DAG

-The below code uses an Airflow DAGs (Directed Acyclic Graph) to demonstrate how we call the sample plugin implemented above.
-In this code the default arguments include details about the time interval, start date, and number of retries.
+The code below uses an Airflow DAG (Directed Acyclic Graph) to demonstrate how we call the sample plugin implemented above.
+In this code, the default arguments include details about the time interval, start date, and number of retries.
You can add additional arguments to configure the DAG to send email on failure, for example.

The DAG is divided into 3 tasks.

-* The first task is to call the sample plugin which checks for the file pattern in the path every 5 seconds and get the exact file name.
+* The first task is to call the sample plugin, which checks for the file pattern in the path every 5 seconds and gets the exact file name.
* The second task is to write to the file.
* The third task is to archive the file.

@@ -198,7 +199,7 @@ def process_file(**context):

# Call python function which writes to file
process_task = PythonOperator(
-    task_id='process_the_file',
+    task_id='process_the_file',
    python_callable=process_file,
    dag=dag)

@@ -208,9 +209,9 @@ archive_task = ArchiveFileOperator(
    filepath=file_path,
    archivepath=archive_path,
    dag=dag)
-
+
# This line tells the sequence of tasks called
-sensor_task >> proccess_task >> archive_task # ">>" is airflow operator used to indicate sequence of the workflow
+sensor_task >> process_task >> archive_task # ">>" is the Airflow operator used to indicate the sequence of the workflow
```

Our team has written similar plugins for data quality checks (a rough sketch of one follows below). Unlike Oozie, Airflow allows code flexibility for tasks, which makes development easy. If you're thinking about scaling your data pipeline jobs, I'd recommend Airflow as a great place to get started.
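If you want to experiment with that pattern, here is a minimal sketch of such a data quality operator. It is an illustration only: the operator name and the `count_callable` parameter are invented here, and it assumes the same Airflow 1.x-era plugin APIs used in the sample plugin above.

```python
# Hypothetical sketch of a data quality check operator. The class name and
# count_callable parameter are illustrative, not part of any released plugin.
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.plugins_manager import AirflowPlugin
from airflow.utils.decorators import apply_defaults


class RecordCountCheckOperator(BaseOperator):
    @apply_defaults
    def __init__(self, count_callable, *args, **kwargs):
        super(RecordCountCheckOperator, self).__init__(*args, **kwargs)
        # count_callable is any zero-argument function that returns a row
        # count, e.g. one that runs a SELECT COUNT(*) against the target table
        self.count_callable = count_callable

    # execute runs when the task is triggered; raising an exception fails the task
    def execute(self, context):
        count = self.count_callable()
        if count == 0:
            raise AirflowException('Data quality check failed: zero records found')
        return count


class DataQualityPlugin(AirflowPlugin):
    name = 'data_quality_plugin'
    operators = [RecordCountCheckOperator]
```

Wired into a DAG right after the insert task (`insert_task >> quality_check_task`), the check runs as soon as the insert finishes rather than at a scheduled time, which is the event-based behavior that Oozie lacks.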
diff --git a/_posts/2018-12-14-require-suffix.md b/_posts/2018-12-14-require-suffix.md index 256fd5e..bac559a 100644 --- a/_posts/2018-12-14-require-suffix.md +++ b/_posts/2018-12-14-require-suffix.md @@ -6,6 +6,7 @@ cover: /assets/images/require-suffix/cover.jpg excerpt: require-suffix is an opensource package to shim Node.js's require to optionally load different files based on platform and file extensions. It ships with custom presets for handling ios, android, and native files targeting react-native. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/require-suffix authors: - name: Michael Luther url: https://github.com/msluther @@ -114,4 +115,4 @@ Hopefully, you find this project as useful as I have. You can read more and get [`react-native`]: https://facebook.github.io/react-native/ [platform-specific extensions]: https://facebook.github.io/react-native/docs/platform-specific-code#platform-specific-extensions [`mocha`]: https://mochajs.org/ -[monkey-patches]: https://en.wikipedia.org/wiki/Monkey_patch \ No newline at end of file +[monkey-patches]: https://en.wikipedia.org/wiki/Monkey_patch diff --git a/_posts/2018-12-20-python-metaclasses.md b/_posts/2018-12-20-python-metaclasses.md index 91f701e..be8210a 100644 --- a/_posts/2018-12-20-python-metaclasses.md +++ b/_posts/2018-12-20-python-metaclasses.md @@ -7,6 +7,7 @@ cover-source: https://www.flickr.com/photos/yukop/6822664892/ excerpt: Python's metaclasses are an obscure and often misunderstood feature of the language. This post introduces readers to metaclasses hands-on by implementing interfaces, motivated by Python's abstract base class, or ABC. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/python-metaclasses authors: - name: Joseph Bergeron url: http://joebergeron.io @@ -122,7 +123,7 @@ def abstractfunc(func): return func ``` -With our decorator in place, let's fill in some boilerplate. We'll want to define a custom metaclass, with dummy methods `__init__` and `__new__`, and have our desired abstract base class inherit from it. Note that the name `Interface` for our metaclass below has nothing to do with the "Interface" in our `NetworkInterface` class -- we could've named `Interface` anything we want. +With our decorator in place, let's fill in some boilerplate. We'll want to define a custom metaclass, with dummy methods `__init__` and `__new__`, and have our desired abstract base class inherit from it. Note that the name `Interface` for our metaclass below has nothing to do with the "Interface" in our `NetworkInterface` class -- we could've named `Interface` anything we want. ```python class Interface(type): diff --git a/_posts/2019-02-26-software-vpn-channel.md b/_posts/2019-02-26-software-vpn-channel.md index 3999bf7..a6344b3 100644 --- a/_posts/2019-02-26-software-vpn-channel.md +++ b/_posts/2019-02-26-software-vpn-channel.md @@ -4,6 +4,7 @@ title: "Connecting an On-Premises Data Center to AWS with HA Software VPN Tunnel date: 2019-02-26 09:00:00 -0800 cover: /assets/images/ha-openvpn/cover.jpg excerpt: When our team started to deploy our services to Amazon cloud, there was a demand to connect from Amazon VPC back to our On-Premises data center. This post describes how we build HA software VPN tunnels. 
+canonical: https://godaddy.com/resources/news/software-vpn-channel
authors:
- name: Kewei Lu
  url: https://www.linkedin.com/in/kewei-lu-216b433a
@@ -20,7 +21,7 @@ In general, there are two ways to build the connection: [AWS direct connect](htt

To configure a high-availability OpenVPN server on AWS, we used the Active-Passive HA configuration. We set up two OpenVPN servers, one primary and one secondary. We ran them simultaneously on two container instances/EC2 instances in the ECS cluster. Each container instance belonged to an auto-scaling group with a desired count of 1. For each auto-scaling group, there was a dedicated auto-scaling launch configuration associated with it. In the launch configuration, we copied the OpenVPN server certs from an S3 bucket to the instance. Also, we assigned an Elastic IP to the container instance to make sure its IP address is persistent after reboot. Then, we connected each OpenVPN Server to an OpenVPN client set up on a GoDaddy VM. This gave us two OpenVPN tunnels.

-To facilitate the OpenVPN server and client setup, we also created server and client side docker images. We pushed the images to the docker registry. Then, we could set up the server or client by pulling and running the docker images.
+To facilitate the OpenVPN server and client setup, we also created server- and client-side docker images. We pushed the images to the docker registry. Then, we could set up the server or client by pulling and running the docker images.

At any given time, only one OpenVPN server (the primary OpenVPN server) is actively being used. All traffic from AWS to the On-Premises data centers will go through that OpenVPN server. We have a CloudWatch rule defined for the AWS ECS task state change event. Based on the event received, the rule will trigger a lambda function to update the route table and promote the secondary server to primary if the primary OpenVPN server is down. The figure below shows one such event.

diff --git a/_posts/2019-03-06-dynamic-configuration-for-nodejs.md b/_posts/2019-03-06-dynamic-configuration-for-nodejs.md
index c3bafe7..0362cbc 100644
--- a/_posts/2019-03-06-dynamic-configuration-for-nodejs.md
+++ b/_posts/2019-03-06-dynamic-configuration-for-nodejs.md
@@ -4,6 +4,7 @@ title: "Dynamic Configuration for Node.js Applications"
date: 2019-03-06 12:00:00 -0700
cover: /assets/images/headers/flipr.jpg
excerpt: Dynamic configuration is a powerful tool for software applications. Use it to solve problems like authorization, feature flags, and A/B tests, in addition to normal application configuration. See how GoDaddy uses a library called flipr to achieve this for some of its Node.js applications.
+canonical: https://godaddy.com/resources/news/dynamic-configuration-for-nodejs
authors:
- name: Grant Shively
  url: https://github.com/gshively11
diff --git a/_posts/2019-04-02-addhoc.md b/_posts/2019-04-02-addhoc.md
index 51741a6..37efea7 100644
--- a/_posts/2019-04-02-addhoc.md
+++ b/_posts/2019-04-02-addhoc.md
@@ -4,6 +4,7 @@ title: "Making React HOC functions the easy way with addhoc"
date: 2019-04-02 09:00:00 -0700
cover: /assets/images/headers/addhoc-cover.jpg
excerpt: As defined in the React documentation, a higher-order component, or HOC, is a function that returns a React component that wraps a specified child component and often provides augmented functionality. Implementing HOCs can be hard when considering hoisting statics, managing ref forwarding, and handling display name.
addhoc is a newly released open-source package that aims to handle these challenges for you. +canonical: https://godaddy.com/resources/news/addhoc authors: - name: Jonathan Keslin title: Director of Engineering, UXCore diff --git a/_posts/2019-04-09-announcing-exemplar.md b/_posts/2019-04-09-announcing-exemplar.md index 1e53954..1e8bcc0 100644 --- a/_posts/2019-04-09-announcing-exemplar.md +++ b/_posts/2019-04-09-announcing-exemplar.md @@ -4,6 +4,7 @@ title: "Creating better examples with @exemplar/storybook" date: 2019-04-09 09:00:00 -0700 cover: /assets/images/exemplar/cover.png excerpt: We're announcing the release of @exemplar/storybook! Exemplar is a way to write examples for your React components with less boilerplate storybook config. Do more by writing less. +canonical: https://godaddy.com/resources/news/announcing-exemplar authors: - name: Sivan Mehta title: Software Engineer, Experience Delivery diff --git a/_posts/2019-04-16-kubernetes-external-secrets.md b/_posts/2019-04-16-kubernetes-external-secrets.md index 2921c49..d962ff6 100644 --- a/_posts/2019-04-16-kubernetes-external-secrets.md +++ b/_posts/2019-04-16-kubernetes-external-secrets.md @@ -4,6 +4,7 @@ title: "Kubernetes External Secrets" date: 2019-04-16 09:00:00 -0700 cover: /assets/images/kubernetes-external-secrets/cover.jpg excerpt: Engineering teams at GoDaddy use Kubernetes with secret management systems, like AWS Secrets Manager. "External" secret management systems often provide useful features, such as rotation, that the native Kubernetes Secret object does not support. Kubernetes External Secrets is a new open source project that introduces the ExternalSecret object type. With an ExternalSecret object, an engineering team can manage its secret data in an external system and access that data in the same way they would if they were using a Secret object. +canonical: https://godaddy.com/resources/news/kubernetes-external-secrets authors: - name: Silas Boyd-Wickizer title: Sr. Director of Engineering diff --git a/_posts/2019-04-25-domain-connect.md b/_posts/2019-04-25-domain-connect.md index c71d94a..060647f 100644 --- a/_posts/2019-04-25-domain-connect.md +++ b/_posts/2019-04-25-domain-connect.md @@ -4,6 +4,7 @@ title: "Creating the Domain Connect Standard" date: 2019-04-25 09:00:00 -0700 cover: /assets/images/domain-connect/cover.png excerpt: Domain Connect is an open standard that makes it easier for users of services like Squarespace or O365 to configure DNS without having to understand the details. The protocol involves two parties. The first is the Service Provider whose user wants to configure DNS to enable the service, and the other is the DNS Provider. The most immediate reaction to it is usually 'This is a no-brainer'. But how did it get created and evolve? How can it help others? +canonical: https://godaddy.com/resources/news/domain-connect authors: - name: Arnold Blinn title: Chief Architect @@ -15,9 +16,9 @@ authors: A few years ago we noticed something at GoDaddy. Third party services for email (e.g. O365 or G Suite) or web hosting (e.g. Squarespace or Shopify) were becoming more popular, and our customers were struggling to properly configure DNS. Even with the best instructions, this continues to be a high barrier for many users. They struggle with making these changes. So services end up not being configured. 
-To fix this, we started working with some of these third parties and developed a simple protocol and experience that allowed customers to setup these applications without having to worry about the specifics of the DNS records. A “one click” configuration.
+To fix this, we started working with some of these third parties and developed a simple protocol and experience that allowed customers to set up these applications without having to worry about the specifics of the DNS records. A “one click” configuration.

-We got this working with about a dozen different services when we realized something. There wasn’t any rocket science in what we were doing; the protocol we developed was, largely speaking, a simple and properly formatted web-based link from the Service Provider to us. So why not turn it into an open standard? We took our protocol, filled in a few gaps, and generalized it up to make it more standards friendly.
+We got this working with about a dozen different services when we realized something. There wasn’t any rocket science in what we were doing; the protocol we developed was, largely speaking, a simple and properly formatted web-based link from the Service Provider to us. So why not turn it into an open standard? We took our protocol, filled in a few gaps, and generalized it to make it more standards-friendly.

Out of this process we created Domain Connect.

@@ -31,7 +32,7 @@ This is a complex operation for users and they often get lost or confused, resul

Domain Connect solves this problem for the user. The protocol has two components.

-The first is in the “discovery” stage of the protocol. Having a hard-coded table of nameservers to determine the DNS Provider is error prone. So instead of doing a query to the TLD for the nameserver, the Service Provider can query the `_domainconnect` TXT record directly from DNS for the domain and determine the DNS Provider.
+The first is in the “discovery” stage of the protocol. Having a hard-coded table of nameservers to determine the DNS Provider is error-prone. So instead of doing a query to the TLD for the nameserver, the Service Provider can query the `_domainconnect` TXT record directly from DNS for the domain and determine the DNS Provider.

The second component makes the changes to DNS. For this, the Service Provider will have first onboarded a template of changes to enable their service with the DNS Provider. Now when the user types their domain name, the Service Provider links to the DNS Provider providing (amongst other data) the domain name, the template, and any other settings. The DNS Provider signs the user in, verifies the user owns the domain name, confirms the change with the user, and makes changes to DNS by applying the template.

@@ -47,17 +48,17 @@ Our next step was to gain adoption. We already had a dozen plus Service Provide

This all changed at a Hackathon at Cloudfest in the spring of 2017. Some engineers from GoDaddy, Host Europe Group, and United Domains got together and implemented two projects. The first was to add Domain Connect support to United Domains. The other project was to build a simple example Service Provider. The latter has since evolved, but can be found at [https://exampleservice.domainconnect.org](https://exampleservice.domainconnect.org).

-At the end of the hackathon we successfully demonstrated configuring our new example service with a domain at United Domains and at GoDaddy. Coincidently the MC of the hackathon was Paul Mockapetris, who along with Jon Postel is credited as a co-inventor of DNS.
A highlight was when we explained the reason for doing this was that normal users don’t understand DNS. Paul is a good-natured person who appreciated and largely agreed with this jab.
+At the end of the hackathon we successfully demonstrated configuring our new example service with a domain at United Domains and at GoDaddy. Coincidentally, the MC of the hackathon was Paul Mockapetris, who along with Jon Postel is credited as a co-inventor of DNS. A highlight was when we explained that the reason for doing this was that normal users don’t understand DNS. Paul is a good-natured person who appreciated and largely agreed with this jab.

-After the hackathon things really took off. United Domains recruited 1&1 which launched an implementation. This led to several more DNS Providers. The Service Providers now had more incentive to implement the protocol. Other companies like Microsoft and Automattic got behind it.
+After the hackathon, things really took off. United Domains recruited 1&1, which launched an implementation. This led to several more DNS Providers. The Service Providers now had more incentive to implement the protocol. Other companies like Microsoft and Automattic got behind it.

## Providing more Customer Value

-Building on this we decided to do some more projects at the Cloudfest Hackathon in 2018. This time we helped Plesk add support for the protocol, both as a DNS and Service Provider.
+Building on this, we decided to do some more projects at the Cloudfest Hackathon in 2018. This time we helped Plesk add support for the protocol, both as a DNS Provider and a Service Provider.

-We also decided to build something useful for customers. We wondered if we could build a Dynamic DNS (DDNS) application using Domain Connect. This allows a server that uses DHCP and gets a dynamic IP address to update a DNS entry whenever the IP address changes. This functionality was popular in the late 1990s with some routers and DNS provides supporting proprietary protocols. While not as commonly used today, some small business customers and advanced users still use this capability.
+We also decided to build something useful for customers. We wondered if we could build a Dynamic DNS (DDNS) application using Domain Connect. This allows a server that uses DHCP and gets a dynamic IP address to update a DNS entry whenever the IP address changes. This functionality was popular in the late 1990s, with some routers and DNS providers supporting proprietary protocols. While not as commonly used today, some small business customers and advanced users still use this capability.

-As you may guess, we were successful and built a [nifty little Windows Application](https://github.com/Domain-Connect/DomainConnectDDNS-Windows) that does this. It runs as a Windows Service or as a System Tray Icon (later we also built Linux versions). It uses the Domain Connect protocol to update an A record whenever your IP address changes. With a short TTL, this is what DDNS does.
+As you may guess, we were successful and built a [nifty little Windows Application](https://github.com/Domain-Connect/DomainConnectDDNS-Windows) that does this. It runs as a Windows Service or as a System Tray Icon (later we also built Linux versions). It uses the Domain Connect protocol to update an A record whenever your IP address changes. With a short TTL, this is what DDNS does.

Note: For this implementation, Domain Connect is implemented using OAuth.
The end user grants permission for the application to update DNS using Domain Connect on their behalf.

@@ -71,7 +72,7 @@ This application uses OAuth to call the same API at different providers. It talk

Of course, coming out of this hackathon, participants from multiple companies helped to improve and evolve the specification. It has since evolved and is supported by over 40 companies with contributors from a wide variety of them, all listed at [https://domainconnect.org](https://domainconnect.org).

-As time passed, more Service Providers onboarded. This included G Suite from Google.
+As time passed, more Service Providers onboarded. This included G Suite from Google.

## Removing Barriers for DNS Providers

One challenge we continued to face was getting more DNS Providers onboard. They
So we went into our third year at the Cloudfest Hackathon with a goal to solve this problem.

-We built a reference implementation for DNS Providers. This library was used to build a proof of concept on top of PowerDNS and Bind.
+We built a reference implementation for DNS Providers. This library was used to build a proof of concept on top of PowerDNS and Bind.

Like all the open source examples as part of Domain Connect, this can be found at [https://www.domainconnect.org/code/](https://www.domainconnect.org/code/).

We currently have several major DNS Providers leveraging this library and launching their implementations in the coming months.
-
+

## The Future

At GoDaddy, we continue to onboard Service Providers onto the platform. And we are looking forward to working with the community to push forward the spec. We also enjoy and will continue to work with the other DNS Providers to help them onboard to the protocol. This helps consumers and makes the Internet easier to use. They say a rising tide lifts all boats, and we feel that Domain Connect is a great ‘tide’.
diff --git a/_posts/2019-05-22-testing-react-native-using-ekke.md b/_posts/2019-05-22-testing-react-native-using-ekke.md
index 215b65e..6295dd9 100644
--- a/_posts/2019-05-22-testing-react-native-using-ekke.md
+++ b/_posts/2019-05-22-testing-react-native-using-ekke.md
@@ -4,6 +4,7 @@ title: "Testing React-Native using ekke"
date: 2019-05-22 09:00:00 -0700
cover: /assets/images/ekke/react-phone.png
excerpt: Introducing `ekke`, a new, unique test runner for React-Native. It allows you to execute your test code directly on the device, eliminating the need for imperfect mocks and enabling you to test in the same environment as your production users.
+canonical: https://godaddy.com/resources/news/testing-react-native-using-ekke
authors:
- name: Arnout Kazemier
  title: Principal Software Engineer
diff --git a/_posts/2019-06-18-react-native-community-contribution-datetimepicker-component.md b/_posts/2019-06-18-react-native-community-contribution-datetimepicker-component.md
index fea7ac5..a20ec7b 100644
--- a/_posts/2019-06-18-react-native-community-contribution-datetimepicker-component.md
+++ b/_posts/2019-06-18-react-native-community-contribution-datetimepicker-component.md
@@ -4,6 +4,7 @@ title: "React Native Community contribution"
date: 2019-06-17 09:00:00 -0700
cover: /assets/images/datetimepicker/calendar.jpg
excerpt: GoDaddy contributed to the lean-core initiative by extracting and merging the DatePicker and TimePicker components so that we could use them in our mobile app. The new component has fewer platform-specific implementations and is easier to maintain and use.
+canonical: https://godaddy.com/resources/news/react-native-community-contribution-datetimepicker-component authors: - name: Martijn Swaagman title: Principal Software Engineer diff --git a/_posts/2019-06-25-asherah-opensource-app-encryption-sdk.md b/_posts/2019-06-25-asherah-opensource-app-encryption-sdk.md index 0b17b6e..87093f5 100644 --- a/_posts/2019-06-25-asherah-opensource-app-encryption-sdk.md +++ b/_posts/2019-06-25-asherah-opensource-app-encryption-sdk.md @@ -4,6 +4,7 @@ title: "Asherah: An Application Encryption SDK" date: 2019-07-09 09:00:00 -0700 cover: /assets/images/asherah/encryption.jpg excerpt: Enterprise data encryption is difficult, error-prone and problematic to scale. In particular, managing key rotation and limiting the blast radius of a leaked private key are difficult problems. GoDaddy is releasing its proposed solution to this problem as open source. It's an Application Encryption SDK called Asherah. Asherah's foundational principle is that you plug in your choice of key management services and then use it to manage your hierarchical key set and encrypt data using a method known as envelope encryption. We're an incubator project and currently in a request-for-feedback phase as we test the implementation internally. +canonical: https://godaddy.com/resources/news/asherah-opensource-app-encryption-sdk authors: - name: Nikhil Lohia title: Software Engineer @@ -19,80 +20,80 @@ authors: photo: https://avatars3.githubusercontent.com/u/684963?s=60&v=4 --- -> "...Most Creation myths begin with a 'paradoxical unity of everything, evaluated either as chaos or as Paradise,' and the -> world as we know it does not really come into being until this is changed. I should point out here that Enki's original name +> "...Most Creation myths begin with a 'paradoxical unity of everything, evaluated either as chaos or as Paradise,' and the +> world as we know it does not really come into being until this is changed. I should point out here that Enki's original name > was En-Kur, Lord of Kur. Kur was a primeval ocean -- Chaos -- that Enki conquered." > > "Every hacker can identify with that." > > "But Asherah has similar connotations. Her name in Ugaritic, 'atiratu yammi' means 'she who treads on (the) sea (dragon).'" > -> "Okay, so both Enki and Asherah were figures who had in some sense defeated chaos. And your point is that this defeat of +> "Okay, so both Enki and Asherah were figures who had in some sense defeated chaos. And your point is that this defeat of > chaos, the separation of the static, unified world into a binary system, is identified with creation." > > "Correct." -> Ng mumbles something and a card appears in his hand. "Here's a new version of the system software," he says. "It should be a +> Ng mumbles something and a card appears in his hand. "Here's a new version of the system software," he says. "It should be a > little less buggy." -> +> > "A little less?" -> +> > "No piece of software is ever bug free," Ng says. -> +> > Uncle Enzo says, "I guess there's a little bit of Asherah in all of us." > > -- Snow Crash, Neal Stephenson -Developers often write software that handles sensitive data like customer information. Best practice and company standards -dictate that this data should be encrypted at multiple levels: at rest, in transit and at the application. Easy-to-use -solutions exist for encryption at rest, like encrypted block stores, and for encryption in transit, like TLS, but writing -solid code for application-level encryption is still challenging. 
Common problems to tackle include choosing a good -cryptographic technique, generating keys and managing them properly, preventing memory scanning attacks and rotating keys. For -example, if you encrypt everything with one key and it is compromised, rotating the key and decrypting-then-re-encrypting all +Developers often write software that handles sensitive data like customer information. Best practice and company standards +dictate that this data should be encrypted at multiple levels: at rest, in transit and at the application. Easy-to-use +solutions exist for encryption at rest, like encrypted block stores, and for encryption in transit, like TLS, but writing +solid code for application-level encryption is still challenging. Common problems to tackle include choosing a good +cryptographic technique, generating keys and managing them properly, preventing memory scanning attacks and rotating keys. For +example, if you encrypt everything with one key and it is compromised, rotating the key and decrypting-then-re-encrypting all of the data is expensive and time consuming. As we have made the transition to cloud native architectures and are well underway moving many services and applications -to AWS, we have continued to focus significant attention on always improving our security posture. We considered how we +to AWS, we have continued to focus significant attention on always improving our security posture. We considered how we could address problems surrounding encryption, key rotation and blast radius reduction as a company rather than leaving -these comparatively difficult problems to each team to solve. As a result, we are delighted to present **Asherah**: an -easy-to-use SDK which abstracts away the complexity of advanced encryption techniques and risk mitigation at enterprise scale. -**Asherah** makes use of **envelope encryption** and **hierarchical keys**. In envelope encryption, the key used to encrypt a -data element is itself encrypted by a separate, *higher order* key and the encrypted key value is stored *with the data*. -These higher order keys form a hierarchy of keys that partition the key space and data, reducing the blast radius of a -compromise and allowing for novel approaches to incremental rotation. **Asherah** abstracts away the complexity of managing -that system, letting developers interact with data and encryption/decryption in standard ways with familiar APIs while +these comparatively difficult problems to each team to solve. As a result, we are delighted to present **Asherah**: an +easy-to-use SDK which abstracts away the complexity of advanced encryption techniques and risk mitigation at enterprise scale. +**Asherah** makes use of **envelope encryption** and **hierarchical keys**. In envelope encryption, the key used to encrypt a +data element is itself encrypted by a separate, *higher order* key and the encrypted key value is stored *with the data*. +These higher order keys form a hierarchy of keys that partition the key space and data, reducing the blast radius of a +compromise and allowing for novel approaches to incremental rotation. **Asherah** abstracts away the complexity of managing +that system, letting developers interact with data and encryption/decryption in standard ways with familiar APIs while offering a very high level of protection against compromise and data loss. 
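To make the envelope idea concrete before walking through the API, here is a minimal, self-contained sketch of envelope encryption. This is an illustration of the concept only, not Asherah's implementation or API; the function names are invented, and it uses Python's `cryptography` package in place of a real key management service.

```python
# Conceptual sketch of envelope encryption (not Asherah's API): a data key
# encrypts the payload, a higher order (master) key encrypts the data key,
# and the wrapped data key is stored alongside the ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)  # stand-in for a KMS-held key

def encrypt_envelope(plaintext):
    data_key = AESGCM.generate_key(bit_length=256)
    data_nonce, key_nonce = os.urandom(12), os.urandom(12)
    return {
        'ciphertext': AESGCM(data_key).encrypt(data_nonce, plaintext, None),
        'data_nonce': data_nonce,
        # the data key is persisted only in encrypted (wrapped) form
        'encrypted_key': AESGCM(master_key).encrypt(key_nonce, data_key, None),
        'key_nonce': key_nonce,
    }

def decrypt_envelope(envelope):
    data_key = AESGCM(master_key).decrypt(
        envelope['key_nonce'], envelope['encrypted_key'], None)
    return AESGCM(data_key).decrypt(
        envelope['data_nonce'], envelope['ciphertext'], None)

assert decrypt_envelope(encrypt_envelope(b'mysupersecretpayload')) == b'mysupersecretpayload'
```

Because only the small wrapped keys are encrypted under the higher order key, rotating that key means re-wrapping keys rather than re-encrypting every payload, which is what makes the incremental rotation described below practical.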
Like alternative libraries such as [Google's Tink](https://github.com/google/tink), we are careful to provide only those
encryption algorithms that are known to be secure and initialize them in conformance with best practices. Our initially supported algorithm is AES256-GCM and we plan
-to provide interfaces for adding others while supporting and including only those that are known to be safe to use. A more
+to provide interfaces for adding others while supporting and including only those that are known to be safe to use. For a more
detailed explanation of how our goals contrast with other open source alternatives and why we chose to propose our own SDK,
see **Related Work** below.

-**Asherah** is an incubator project and we are currently testing internally. In addition, we have a roadmap that includes
-plans to have third-party security audits of the code for every supported language. Our goal in open sourcing it is to
-invite the security community and the developer community at large to help us evaluate, test and iterate on this solution so
+**Asherah** is an incubator project and we are currently testing internally. In addition, we have a roadmap that includes
+plans to have third-party security audits of the code for every supported language. Our goal in open sourcing it is to
+invite the security community and the developer community at large to help us evaluate, test and iterate on this solution so
that we can help developers manage private data more securely.

## Using Asherah

-We wanted to make it easy for developers to write code that manages customer data without being forced to implement
+We wanted to make it easy for developers to write code that manages customer data without being forced to implement
important features like key rotation and hierarchical key structures from scratch. The API itself is easy to use.

### Step 1: Create a session factory

Each encryption context is wrapped in a new session that is produced from a factory method. The session contains details
on the particular keys from a key hierarchy that will be used, a caching policy, a key rotation policy and the configuration
of how performance metrics will be logged. A session is required for any encryption/decryption operations. For simplicity,
-the session factory uses the builder pattern, specifically a step builder. This ensures all required properties are set before
-a factory is built.
+the session factory uses the builder pattern, specifically a step builder. This ensures all required properties are set before
+a factory is built.

-To obtain an instance of the builder, use the static factory method `newBuilder`. Once you have a builder, you can
-use the `with` setter methods to configure the session factory properties. Below is an example of a
+To obtain an instance of the builder, use the static factory method `newBuilder`. Once you have a builder, you can
+use the `with` setter methods to configure the session factory properties. Below is an example of a
session factory that uses in-memory persistence and static key management.
```java
AppEncryptionSessionFactory appEncryptionSessionFactory = AppEncryptionSessionFactory
-    .newBuilder("productId", "systemId")
+    .newBuilder("productId", "systemId")
    .withMemoryPersistence()
    .withNeverExpiredCryptoPolicy()
    .withStaticKeyManagementService("secretmasterkey!") // hard-coded/static master key
@@ -100,16 +101,16 @@ AppEncryptionSessionFactory appEncryptionSessionFactory = AppEncryptionSessionFa
    .build())
```

-We recommend that every service have its own session factory, preferably as a singleton instance within the
-service. This will allow you to leverage caching and minimize resource usage. Always remember to close the
-session factory before exiting the service to ensure that all resources held by the factory, including the
+We recommend that every service have its own session factory, preferably as a singleton instance within the
+service. This will allow you to leverage caching and minimize resource usage. Always remember to close the
+session factory before exiting the service to ensure that all resources held by the factory, including the
cache, are disposed of properly.

### Step 2: Create a session

-Now that we have session factory, we need to create a session to be able to actually encrypt/decrypt any data. Use the factory
-created in step 1 to do this. The payload and data row record types can be specified while creating the session. These are
+Now that we have a session factory, we need to create a session to be able to actually encrypt/decrypt any data. Use the factory
+created in step 1 to do this. The payload and data row record types can be specified while creating the session. These are
currently restricted to JSON objects and byte arrays.

```java
@@ -124,22 +125,22 @@ the resources properly.

### Step 3: Use the session to accomplish the cryptographic task

-We are now ready to use **Asherah** to encrypt and decrypt data. **Asherah** supports two usage patterns. We'll use the
-simpler encrypt/decrypt pattern for the purpose of this post. For usage details of the advanced load/store
+We are now ready to use **Asherah** to encrypt and decrypt data. **Asherah** supports two usage patterns. We'll use the
+simpler encrypt/decrypt pattern for the purpose of this post. For usage details of the advanced load/store
pattern, [please check out our public repo on GitHub](https://github.com/godaddy/asherah).

Encrypt/Decrypt:

-This usage style is similar to common encryption utilities where payloads are simply encrypted and decrypted, and
+This usage style is similar to common encryption utilities where payloads are simply encrypted and decrypted, and
it is completely up to the calling application to handle storage.

```java
String originalPayloadString = "mysupersecretpayload";

-// encrypt the payload
+// encrypt the payload
byte[] dataRowRecordBytes = encryptionSessionBytes.encrypt(originalPayloadString.getBytes(StandardCharsets.UTF_8));

-// decrypt the payload
+// decrypt the payload
String decryptedPayloadString = new String(encryptionSessionBytes.decrypt(newBytes), StandardCharsets.UTF_8);
```

@@ -152,44 +153,44 @@ Here is a diagram showing at a high level a typical encryption operation in **As

Features:

-* **Easy incremental key rotation and blast radius reduction**: **Asherah** generates cryptographically strong keys and
-arranges them in a hierarchy, enhancing the value provided by envelope encryption.
The hierarchical key model also encourages
+* **Easy incremental key rotation and blast radius reduction**: **Asherah** generates cryptographically strong keys and
+arranges them in a hierarchy, enhancing the value provided by envelope encryption. The hierarchical key model also encourages
frequent key rotation which limits the blast radius in case of a security breach. These key rotations happen automatically as you encrypt and decrypt data according to the *crypto policy* you use in your session. Behind the scenes, **Asherah** considers whether keys are revoked, stale or otherwise in need of rotation and decrypts and re-encrypts your data and rotates your keys.

-* **User configurable key management service**: **Asherah** can integrate with master key management services using a
+* **User configurable key management service**: **Asherah** can integrate with master key management services using a
pluggable key management service interface, allowing it to be cloud agnostic or support on-premise implementations.
* **User configurable datastore**: **Asherah** manages generated data keys via a pluggable datastore, providing you with a
flexible architecture.
-* **In-memory key protection against a growing number of key hijacking attacks**: **Asherah** takes advantage of our **Secure
-Memory** library, which makes use of native calls and off-heap memory to secure keys. This protects against several memory
-investigation attacks such as scanning memory directly via proc, forcing a process to page to disk to recapture process memory
-and triggering a core dump. As we continue to implement new ways to protect memory and pair these with recommended system-level
-settings (such as, on Linux, setting /proc/sys/kernel/yama/ptrace_scope to a restrictive value), the protections we add to
+* **In-memory key protection against a growing number of key hijacking attacks**: **Asherah** takes advantage of our **Secure
+Memory** library, which makes use of native calls and off-heap memory to secure keys. This protects against several memory
+investigation attacks such as scanning memory directly via proc, forcing a process to page to disk to recapture process memory
+and triggering a core dump. As we continue to implement new ways to protect memory and pair these with recommended system-level
+settings (such as, on Linux, setting /proc/sys/kernel/yama/ptrace_scope to a restrictive value), the protections we add to
this library give Asherah's internal key caches greater resilience to attack.

As a developer, the three primary external resources you interact with are the `KeyManagementService`, the `Metastore` and the
-`AppEncryptionSessionFactory`. The `KeyManagementService` is used to integrate with a service, typically a cloud provider's
-core key management implementation, that manages the master key you use as the root for our hierarchical key model. The
-`Metastore` is the backing datastore **Asherah** uses to manage the data keys it generates to construct the hierarchical
-model.
Both of these interfaces follow a pluggable model so that **Asherah** remains highly extensible for the diversity of
+`AppEncryptionSessionFactory`. The `KeyManagementService` is used to integrate with a service, typically a cloud provider's
+core key management implementation, that manages the master key you use as the root for our hierarchical key model. The
+`Metastore` is the backing datastore **Asherah** uses to manage the data keys it generates to construct the hierarchical
+model. Both of these interfaces follow a pluggable model so that **Asherah** remains highly extensible for the diversity of
use-cases that must be managed in enterprise-scale environments. Finally, the `AppEncryptionSessionFactory` is where you
-initialize your encryption or decryption context. A helpful and configurable `CryptoPolicy` is initialized in this context and
-it wraps and manages the complexity of key rotation schedules and caching behavior, among other things. Future **Asherah**
+initialize your encryption or decryption context. A helpful and configurable `CryptoPolicy` is initialized in this context and
+it wraps and manages the complexity of key rotation schedules and caching behavior, among other things. Future **Asherah**
features will primarily be exposed via the policy.

![Diagram 2]({{site.baseurl}}/assets/images/asherah/envelope.png)

-Envelope encryption is a method for managing and storing key material alongside the data that the key encrypts. In this
+Envelope encryption is a method for managing and storing key material alongside the data that the key encrypts. In this
model, when you encrypt a data element, you take the key you used to encrypt the data, encrypt the **key** with a separate,
-*higher order* key and then store the encrypted key in the same data structure as the encrypted data. In the diagram
+*higher order* key and then store the encrypted key in the same data structure as the encrypted data. In the diagram
above, the higher order key is used to encrypt a random string, the lower order key plaintext, creating a lower order key
ciphertext. The "envelope" is then created with the lower order key ciphertext and the ciphertext you get by encrypting your
data with the lower order key plaintext. The dotted line shows the inclusion of these elements in the envelope.

-Envelope encryption can be useful for simplifying the management of the source of truth for which key is currently in play for
-which data element (the envelope itself is the source of truth, rather than a separate metadata store) and provides a simple
-basis from which a key hierarchy can be built. A very thoughtful description of this methodology can be found [on Google's
+Envelope encryption can be useful for simplifying the management of the source of truth for which key is currently in play for
+which data element (the envelope itself is the source of truth, rather than a separate metadata store) and provides a simple
+basis from which a key hierarchy can be built. A very thoughtful description of this methodology can be found [on Google's
Security Products page](https://cloud.google.com/kms/docs/envelope-encryption).

The notion of higher and lower order keys can be generalized to a hierarchy or tree of keys:

@@ -197,10 +198,10 @@ The notion of higher and lower order keys can be generalized to a hierarchy or t

![Diagram 3]({{site.baseurl}}/assets/images/asherah/key_hierarchy.png)

The key hierarchy here has several tiers, each of which you can use to partition your data. A good example of a plausible
data partitioning scheme would be to assign each service in your infrastructure a separate SK.
Then, assign each customer in
+your service a separate IK. This would mean that every data element in the DRR (data row record) layer is encrypted
+using a private key that, even if recovered, could never expose the data of another customer, or any data at all from a
+different service.

In order to see how all of these pieces fit together, let's take a look at a sequence diagram of encrypting a payload using **Asherah**:

@@ -216,9 +217,9 @@ stale in the cache. All of this complexity is already implemented for you in **A

When we decided to address these problems internally, our first step was to evaluate alternative open source libraries that might help. There are a small number of well-supported projects that have some of the features we wanted, such as wrapping calls to cryptographic libraries and exposing pluggable key storage backends. Two of these were similar enough that we
-evaluated them in depth: the
-[AWS Application Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) and
-[Google's Tink](https://github.com/google/tink). In each case, though we did see some overlap between our goals and the
+evaluated them in depth: the
+[AWS Application Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) and
+[Google's Tink](https://github.com/google/tink). In each case, though we did see some overlap between our goals and the
goals of these projects, our focus on key rotation, on a key hierarchy for blast radius reduction, on a modular
*CryptoPolicy* for managing aspects of the library's behavior, and on layering the library on top of a foundation
where we could continue expanding our protections of in-memory cache data ended up moving us
@@ -227,18 +228,18 @@ ways to contribute and work together on these problems.

## Conclusion

-Implementing application layer encryption is a challenge to get right. **Asherah** makes it easy to incorporate an
-advanced hierarchical key model with ready-to-use pluggable key storage, while never compromising
-on memory protection. We want developers to focus on what drives their business domain and still maintain a high
+Implementing application layer encryption is a challenge to get right. **Asherah** makes it easy to incorporate an
+advanced hierarchical key model with ready-to-use pluggable key storage, while never compromising
+on memory protection. We want developers to focus on what drives their business domain and still maintain a high
security posture.

The release of **Asherah** to the public is significant: it tackles a complex problem across many languages. Internally, our teams are continuously testing the security model provided to ensure that the ideas work and address real-world
-problems. Further, this drives our progress in adding additional languages and features, which are already in the works.
-Our roadmap includes plans to perform external security audits for each codebase as we evolve the project out of the incubator
+problems. Further, this drives our progress in adding additional languages and features, which are already in the works.
+Our roadmap includes plans to perform external security audits for each codebase as we evolve the project out of the incubator
phase. We hope the rest of the community can benefit from the work that has been invested in this project.

-Help us make it better! Let us know what you think!
Head to [our repo](https://github.com/godaddy/asherah) to start learning
+Help us make it better! Let us know what you think! Head to [our repo](https://github.com/godaddy/asherah) to start learning
more.

@@ -247,8 +248,8 @@ more.

[Joey Wilhelm](https://www.linkedin.com/in/joewilhelm/) and [Lilia
Abaibourova](https://www.linkedin.com/in/liliaparadis/) provided feedback on the Open Source documentation and contributed valuable
-additions that make up the foundation of this effort and this post. [Eddie
-Abrams](https://www.linkedin.com/in/zeroaltitude/) provided cheerleading
+additions that make up the foundation of this effort and this post. [Eddie
+Abrams](https://www.linkedin.com/in/zeroaltitude/) provided cheerleading
support and bottomless caffeinated beverages on demand.

diff --git a/_posts/2019-07-16-domain-vertial-classifier.md b/_posts/2019-07-16-domain-vertial-classifier.md
index d4c0907..ce2155e 100644
--- a/_posts/2019-07-16-domain-vertial-classifier.md
+++ b/_posts/2019-07-16-domain-vertial-classifier.md
@@ -4,6 +4,7 @@ title: "A Simple CNN Classifier for Domain Name Industrial Market Segmentation"
date: 2019-07-16 09:00:00 -0700
cover: /assets/images/domainclassifier/cover.jpg
excerpt: A real-world example that develops a multi-class Convolutional Neural Network (CNN) Classifier that works well on very short texts -- domain names. Read more to see how we dealt with the noisiness in the data, clarified the project goal and improved the model iteratively.
+canonical: https://godaddy.com/resources/news/domain-vertial-classifier
authors:
- name: Raina Tian
  title: Data Scientist
@@ -15,13 +16,13 @@ authors:

## The Problem

-Many analytical and technical tasks in today's world can employ a classifier: a good classifier generalizes patterns and uncovers important hidden features in a data set. Over the years, as we've been focusing on providing engaging user experiences for small business owners, a good classifier that can accurately determine a domain name's industrial market segment has been in high demand. Since we have a large number of users onboarded from GoDaddy Website Builder, we are finally in a good place to solve this problem by leveraging neural network technologies.
+Many analytical and technical tasks in today's world can employ a classifier: a good classifier generalizes patterns and uncovers important hidden features in a data set. Over the years, as we've been focusing on providing engaging user experiences for small business owners, a good classifier that can accurately determine a domain name's industrial market segment has been in high demand. Since we have a large number of users onboarded from GoDaddy Website Builder, we are finally in a good place to solve this problem by leveraging neural network technologies.

### A Closer Look

-Text classification is a common practice in the industry. For more information on text classification, this article written by [Mirończuk & Protasiewicz] gives an awesome review of the recent state-of-the-art elements of text classification.
+Text classification is a common practice in the industry. For more information on text classification, this article written by [Mirończuk & Protasiewicz] gives an awesome review of the recent state-of-the-art elements of text classification.

-In most real-life applications, when a classifier covers a larger set of labels, a longer input text generally helps the model predict better.
The requirements for our task, though, are a little different. Do we have a long text input to run the classifier on? Absolutely not: a domain name on average has only 3-5 words -- that's even shorter than most sentences. Do we want to cover as many categories as possible? Yes! The whole point is to get a comprehensive view of domain industrial market segmentation. I hope you are with me now: this task is challenging and ambitious.
+In most real-life applications, when a classifier covers a larger set of labels, a longer input text generally helps the model predict better. The requirements for our task, though, are a little different. Do we have a long text input to run the classifier on? Absolutely not: a domain name on average has only 3-5 words -- that's even shorter than most sentences. Do we want to cover as many categories as possible? Yes! The whole point is to get a comprehensive view of domain industrial market segmentation. I hope you are with me now: this task is challenging and ambitious.

The domain name dataset presents a number of other challenges; here are the problems we dealt with when building this model:

@@ -48,8 +49,8 @@ Now, let's talk about [Problem 1](#problems). One important thing I learned from

2. More Accurate at a Lower Resolution

   The categories in this collection are organized in a hierarchical structure whose levels are often not mutually exclusive. For example, "Himalayan or Nepalese Restaurant" is a subcategory of "Restaurant". Conceivably, we could create millions of features, adding additional layers and complexity to the model, to make it perform across all these categories. However, the goal is to teach the model human intuition. Not to mention that a more complex model usually requires a much larger data set, takes more time and resources to train and performs much slower. Therefore, we decided to only look at the very top level of the hierarchical structure, and reorganize it to keep the top levels mutually exclusive with each other.
-
+
   The reorganized category set is what's finally used for training the model; we will call it the **"model label set"** in the rest of the article. The raw data will be relabeled according to the **"model label set"**, and it will be called **"relabeled data"**.

@@ -58,27 +59,27 @@ Now, let's talk about [Problem 1](#problems). One important thing I learned from

### Woohoo, it comes with the label!

-Moving on to [Problem 2](#problems). Before I dive into the noisiness of the data, I want to first emphasize the convenience in obtaining a curated data set. A supervised classification problem requires a well-labeled data set. In many real-world settings, this means heavy manual labeling work needs to be done by a group of trained annotators. It is usually time-consuming and expensive.
Luckily, with GoDaddy Website Builder, a newly acquired customer will potentially contribute to the data set by providing a category label that best describes their own site. Therefore we are able to associate a piece of short text such as site name, domain name, site description and title, etc., with the self-reported category label. It is such a beautiful convenience that saves us a tremendous amount of time and effort in data labeling.
+Moving on to [Problem 2](#problems). Before I dive into the noisiness of the data, I want to first emphasize the convenience in obtaining a curated data set. A supervised classification problem requires a well-labeled data set. In many real-world settings, this means heavy manual labeling work needs to be done by a group of trained annotators. It is usually time-consuming and expensive. Luckily, with GoDaddy Website Builder, a newly acquired customer will potentially contribute to the data set by providing a category label that best describes their own site. Therefore we are able to associate a piece of short text such as site name, domain name, site description and title, etc., with the self-reported category label. It is such a beautiful convenience that saves us a tremendous amount of time and effort in data labeling.

### But wait a second...

-However, just like any self-reported data, the quality can be a concern. On top of that, unlike other self-reported data, the question we are trying to collect an answer for is very difficult -- as I mentioned earlier, the category collection has a very complicated hierarchical structure with over 1,600 choices in total. As you may imagine, a majority of people left their answers empty, or answered prematurely, without thinking it through, in order to proceed to the next steps. Even when the answers were carefully filled out, inconsistencies are very likely to occur.
+However, just like any self-reported data, the quality can be a concern. On top of that, unlike other self-reported data, the question we are trying to collect an answer for is very difficult -- as I mentioned earlier, the category collection has a very complicated hierarchical structure with over 1,600 choices in total. As you may imagine, a majority of people left their answers empty, or answered prematurely, without thinking it through, in order to proceed to the next steps. Even when the answers were carefully filled out, inconsistencies are very likely to occur.

-Because of this, we needed to massage the raw data to generate a higher-quality relabeled data set to train the model on. The **"relabeled data"** can be obtained by repeating the **Cleaning** and **Relabeling** steps iteratively, until it converges to a stable state. The set of categories covered by the relabeled data is the **"model label set"**. (see Figure below)
+Because of this, we needed to massage the raw data to generate a higher-quality relabeled data set to train the model on. The **"relabeled data"** can be obtained by repeating the **Cleaning** and **Relabeling** steps iteratively, until it converges to a stable state. The set of categories covered by the relabeled data is the **"model label set"**. (see Figure below)

![Data Cleaning and Relabeling]({{site.baseurl}}/assets/images/domainclassifier/data-relabeling-cleaning.png)

1. **Cleaning** - Bad Data

   A model learns patterns in the data. If the data itself is of low quality, the model will pick up poor knowledge and perform badly.
Therefore, using techniques to identify and remove invalid or low-quality data from the training set is crucial to obtaining an optimal model. We consider a training example as valid when the short text consists of at least one English dictionary word, as well as when the original category label could be mapped to a label from the **"model label set"**.
-
-2. **Relabeling** - Good Data with Disagreements
-
+
+2. **Relabeling** - Good Data with Disagreements
+
   Individuals can behave inconsistently: we observed this in many examples where people use the same name for different industry categories, or choose different industry categories when the business names essentially describe the same thing. Relabeling the data completely manually is very tedious and error-prone and thus can easily introduce even more noise. Therefore, we chose to use the previous model to assist the relabeling process. After we obtain a model from the last iteration, we examine the model's prediction accuracy category by category and review wrong predictions with high confidence levels (see code below). This will allow us to quickly identify dominating mislabel trends in the previous **relabeled data**, so we can then correct and adjust to form a new set of **relabeled data** and **model label set** for the next iteration.
-
-
```python
# Accuracy by Categories
@@ -163,11 +164,11 @@ Here are a few examples from the model:

    getfitwithraina.net --> Fitness & Gyms
    helpwithlaw.org --> Lawyer & Attorney

-This Domain Classifier is currently giving support to multiple teams within GoDaddy. For example, the business intelligence team uses domain name industry category information to segment users and identify more industry-sensitive needs and potentials; the domain name search team uses name industry category information to give industry-related recommendations to improve user experiences.
+This Domain Classifier is currently giving support to multiple teams within GoDaddy. For example, the business intelligence team uses domain name industry category information to segment users and identify more industry-sensitive needs and potentials; the domain name search team uses name industry category information to give industry-related recommendations to improve user experiences.

## Conclusions

-This article details our experiences in developing and deploying a powerful model that can classify a domain name into an industry category. I hope it will be helpful for those considering a related problem. As there are so many great examples online regarding CNN classifiers for all different kinds of interesting problems, my goal here is to share my experience on how to identify the unique challenges for a specific real-life problem and how to solve them efficiently using the limited resources available. Please don't hesitate to reach out to me on LinkedIn.
Your thoughts, contributions and questions are always welcome!

## Acknowledgements
[Wenbo Wang](https://www.linkedin.com/in/iwenbowang/) contributed to the development of the model, and [Navraj Pannu](https://www.linkedin.com/in/navraj-pannu-746359177/) provided valuable feedback on this blog post.
diff --git a/_posts/2019-07-19-secrets-gpg-blackbox-docker-an-experimental-approach.md b/_posts/2019-07-19-secrets-gpg-blackbox-docker-an-experimental-approach.md
index 7f4b294..a6f5d44 100644
--- a/_posts/2019-07-19-secrets-gpg-blackbox-docker-an-experimental-approach.md
+++ b/_posts/2019-07-19-secrets-gpg-blackbox-docker-an-experimental-approach.md
@@ -4,6 +4,7 @@ title: "Secrets, GPG, BlackBox, and Docker - an Experimental Approach"
date: 2019-07-19 09:00:00 -0700
cover: /assets/images/secrets-gpg-blackbox-docker-an-experimental-approach/cover.png
excerpt: This article describes an experimental approach on how Blackbox and Docker can be used in combination to manage secrets.
+canonical: https://godaddy.com/resources/news/secrets-gpg-blackbox-docker-an-experimental-approach
authors:
- name: Mayank Jethva
  title: Software Engineer
@@ -53,14 +54,14 @@ At its core, it relies on Gnu Privacy Guard (GPG) to encrypt/decrypt files using
- A public key, which you give to the....public (i.e., other entities which you want to communicate with).
- Assume Alice has generated a GPG public/private keypair (often just denoted as a GPG keypair).
- Alice can send a message to Bob by encrypting a message (i.e., the plaintext) with Bob's public key. Only Bob can decrypt this message since he has the corresponding private key.
-- Bob can send a message to Alice by encrypting a message using Alice's public key. Only Alice can decrypt this message because she has the corresponding private key.
-- But the question remains about how Alice and Bob can verify the authenticity and integrity of the message.
-- Alice can first construct the following:
+- Bob can send a message to Alice by encrypting a message using Alice's public key. Only Alice can decrypt this message because she has the corresponding private key.
+- But the question remains about how Alice and Bob can verify the authenticity and integrity of the message.
+- Alice can first construct the following:
  1. The plain original message (e.g., "Hey, what's up?")
  2. A signed secure hash of the original message, created using her private key
  3. Hashing algorithm details (e.g., SHA256)
- Alice can combine all of these individual pieces and encrypt them with Bob's public key, then send the result to Bob.
-- Once Bob decrypts the received message, he has these three pieces.
+- Once Bob decrypts the received message, he has these three pieces.
- Bob can verify the authenticity and integrity by first computing a secure hash of the plain original message, based on the algorithm details sent by Alice, followed by using Alice's public key to verify the signed hash which was received. If the received signed hash and the computed hash match, the message was not tampered with. Also, since only Alice's private key could be used to generate the received signed hash, Bob knows this message is actually from Alice.

There are a few algorithms which support asymmetric encryption/decryption. A prominent one is RSA. We'll be using GPG to create RSA public/private key pairs which will then be used by Blackbox to handle encrypting/decrypting our secrets. The secrets are placed in version control.
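To make the Alice-and-Bob flow above concrete, here is a minimal Python sketch of the same sign-then-encrypt idea using the `cryptography` package. This is our own illustration rather than GPG's or Blackbox's actual implementation, and it simplifies things: real GPG bundles the message, signature, and algorithm details into a single encrypted payload (and uses RSA only to wrap a session key), while this sketch handles the pieces separately. The key size and padding choices are assumptions for the example:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Each party generates a public/private keypair (the RSA analogue of a GPG keypair).
alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Hey, what's up?"

# Alice signs the message with her private key (PSS padding, SHA256 hash)...
signature = alice_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ...and encrypts the message with Bob's public key, so only Bob can read it.
# (Raw RSA can only encrypt short payloads directly; GPG uses it to wrap a session key.)
ciphertext = bob_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Bob decrypts with his private key...
plaintext = bob_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# ...then verifies the signature with Alice's public key; verify() raises
# InvalidSignature if the message was tampered with or signed by someone else.
alice_key.public_key().verify(
    signature,
    plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print(plaintext)  # b"Hey, what's up?"
```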
@@ -469,7 +470,7 @@ exec "$@"

Run:

-> `docker-compose -f docker-compose.yml build blackbox-containerized`
+> `docker-compose -f docker-compose.yml build blackbox-containerized`
> `docker-compose -f docker-compose.yml run blackbox-containerized`

### Conclusion
diff --git a/_posts/2019-07-26-domain-name-valuation.md b/_posts/2019-07-26-domain-name-valuation.md
index 5cce45a..3c512cc 100644
--- a/_posts/2019-07-26-domain-name-valuation.md
+++ b/_posts/2019-07-26-domain-name-valuation.md
@@ -4,6 +4,7 @@ title: "Using Deep Learning for Domain Name Valuation"
date: 2019-07-26 09:00:00 -0700
cover: /assets/images/domain-name-valuation/neuralnetwork.png
excerpt: How we built GoDaddy Domain Appraisals (GoValue) with deep neural networks and achieved accuracy better than a human expert.
+canonical: https://godaddy.com/resources/news/domain-name-valuation
authors:
- name: Jason Ansel
  title: Senior Principal Engineer
diff --git a/_posts/2019-08-13-kubernetes-gated-deployments.md b/_posts/2019-08-13-kubernetes-gated-deployments.md
index 297d4d9..d091ef6 100644
--- a/_posts/2019-08-13-kubernetes-gated-deployments.md
+++ b/_posts/2019-08-13-kubernetes-gated-deployments.md
@@ -4,6 +4,7 @@ title: "Kubernetes Gated Deployments"
date: 2019-08-13 09:00:00 -0700
cover: /assets/images/kubernetes-gated-deployments/cover.jpg
excerpt: Kubernetes Gated Deployments is a Kubernetes controller that facilitates automatic regression testing and canary analysis on Kubernetes deployments. It is designed to augment existing deployment processes by analyzing key functionality and performance metrics associated with the application, and can detect and roll back changes if they cause undesirable behavior.
+canonical: https://godaddy.com/resources/news/kubernetes-gated-deployments
authors:
- name: Steven Fu
  title: Software Engineer
diff --git a/_posts/2019-09-03-doh-concerns.md b/_posts/2019-09-03-doh-concerns.md
index d7202bf..9925acc 100644
--- a/_posts/2019-09-03-doh-concerns.md
+++ b/_posts/2019-09-03-doh-concerns.md
@@ -4,6 +4,7 @@ title: "DNS-over-HTTPS: Privacy and Security Concerns"
date: 2019-09-04 09:00:00 -0700
cover: /assets/images/doh/DoH-blog-picture.png
excerpt: New DNS privacy standards (DoH and DoT) have been published by the IETF. DNS has also had backwards-compatible security extensions added via DNSSEC for several years. This post examines the browser-supported DoH and compares it to DoT, and examines privacy, security, and risks.
+canonical: https://godaddy.com/resources/news/doh-concerns
authors:
- name: Brian Dickson
  title: Principal Software Engineer
@@ -24,7 +25,7 @@ DNS was originally specified in the standards published by the [IETF](https://ww

DNS has scaled extremely well, handling the growth of the Internet for the last 35 years. Client systems make use of intermediaries known as "resolvers", which do the bulk of the "look-up" work in DNS, and which implement caching to avoid duplication of look-ups. The actual data in DNS is hosted on what are known as "authoritative" servers. In addition, the namespace for DNS is hierarchical, with each level of the hierarchy only needing to serve/maintain information about the next level down in the "tree" of names.

-When a browser wants to access a website, such as www.example.com, the browser asks the local operating system to look up that name. The local system then sends a request to one of the configured DNS resolver(s) and waits for the response, which will be an IP address that the browser needs.
The resolver checks its cache for helpful answers, and whenever it does not find the information it needs, it talks to the corresponding authoritative DNS servers to get what it requires. It starts with the "root" servers, who tell it where to find the "com" top-level domain servers. Then it asks the "com" servers about "example.com", and finally it asks the "example.com" servers about www.example.com.
+When a browser wants to access a website, such as www.example.com, the browser asks the local operating system to look up that name. The local system then sends a request to one of the configured DNS resolver(s) and waits for the response, which will be an IP address that the browser needs. The resolver checks its cache for helpful answers, and whenever it does not find the information it needs, it talks to the corresponding authoritative DNS servers to get what it requires. It starts with the "root" servers, who tell it where to find the "com" top-level domain servers. Then it asks the "com" servers about "example.com", and finally it asks the "example.com" servers about www.example.com.

If a site is popular, the resolver will typically have the data in its cache and return it immediately. This, in turn, not only ensures the DNS system is not overloaded, but also significantly improves the performance of the World Wide Web.

@@ -75,7 +76,7 @@ All four DNS transport protocols support DNSSEC in theory. In practice, the issu

The main distinctions between the four _protocols_ are their channel security (visibility to on-path observers), and their interaction with network administrators for monitoring and blocking of DNS traffic. Both DoH and DoT provide channel security, since the DNS traffic itself is encrypted for both. All three of DNS/UDP, DNS/TCP, and DoT are compatible with network monitoring and blocking at an IP level (address and port). However, DoT does encrypt the actual DNS queries, so only the existence of a DNS server would be visible to an observer (or possible to block). Since DoH uses the same transport as HTTPS, it is (by design) not compatible with the network administrator's monitoring and blocking, as there is no way to distinguish DoH from other HTTPS traffic.
-
+
The other areas of comparison between DoH and DoT are the proposed deployment profiles, changes to the host "stack", and selection of DNS resolvers.

DoT was developed as an "upgrade" to client-resolver DNS communications. It was intended to operate on a dedicated port, specifically so that both the client and server agreed that communication would be TLS-only (encrypted). This avoided some of the early problems when older protocols attempted to do "opportunistic TLS" via STARTTLS. Because TLS is enforced, many attack methods become impossible (e.g. downgrade attacks on the protocol). DoT was also intended to act as a drop-in replacement (or upgrade) to existing DNS clients at the system level, so that applications could continue interacting with the host operating system (OS) without requiring modifications. Thus, the whole application-OS-DNS part of the client stack was conceptually identical, including all of the security, deployment, and management mechanisms. While DoT was compatible with third-party DNS resolvers, it was not specifically intended for them, and was effectively resolver-neutral. Theoretically, any DNS resolver could be upgraded to DoT, and likewise, any network operator could choose to allow or block DoT traffic to DNS resolvers to which it did not want to allow access (e.g.
so that internal DNS resolvers could be used exclusively, for enterprise DNS).

@@ -117,9 +118,9 @@ There are several important changes resulting from the DoH functionality as impl

"If this had not been a test, you would have been instructed where to tune in your area for news and official information." - [paraphrased EBS text](https://en.wikipedia.org/wiki/Emergency_Broadcast_System)

Browsers are **_already implementing_** DoH, although they do so as a "disabled" feature configurable via an advanced user preference.
-This includes both Chrome (Google) and Firefox (Mozilla) browsers.
+This includes both Chrome (Google) and Firefox (Mozilla) browsers.

-For Firefox, the code is present in releases since Firefox 62, and can be enabled and configured using the "about:config" method (browser bar entry), and looking for fields that start with "network.trr":
+For Firefox, the code is present in releases since Firefox 62, and can be enabled and configured using the "about:config" method (browser bar entry), and looking for fields that start with "network.trr":
* Setting "network.trr.mode" to "2" turns on DoH.
* The default DoH resolver is CloudFlare, "https://mozilla.cloudflare-dns.com/dns-query"
* The DoH resolver is selected by configuring "network.trr.uri" accordingly
@@ -154,7 +155,7 @@ There are not a lot of good options for addressing any of these problems.
  * ...Particularly whenever there is an upgrade to browser software.
  * You may want to disable automatic updates
    * ...which in turn may result in a lowered security posture.
-    * Automatic updates are preferable since they avoid leaving known vulnerabilities unpatched.
+    * Automatic updates are preferable since they avoid leaving known vulnerabilities unpatched.
* It may be advisable (or even necessary) to "blacklist" particular browsers (or browser versions), if you need to have a stricter security posture.
* Browsers which expose DNS configuration to unprivileged users:
  * May be more vulnerable to malware that changes DNS settings.
diff --git a/_posts/2019-09-27-PHP-malware-and-xor-encrypted-requests.md b/_posts/2019-09-27-PHP-malware-and-xor-encrypted-requests.md
index a82b0c5..497a3f3 100644
--- a/_posts/2019-09-27-PHP-malware-and-xor-encrypted-requests.md
+++ b/_posts/2019-09-27-PHP-malware-and-xor-encrypted-requests.md
@@ -4,6 +4,7 @@ title: "PHP Malware and XOR Encrypted Requests"
date: 2019-09-27 09:00:00 -0700
cover: /assets/images/php-xor/php-xor-cover.png
excerpt: An analysis of the methods that malicious users use to implement XOR encryption to try and hide the data in HTTP requests sent to PHP malware existing on a compromised website or webserver.
+canonical: https://godaddy.com/resources/news/php-malware-and-xor-encrypted-requests
authors:
- name: Luke Leal
  title: Security Engineer, Sucuri
@@ -19,7 +20,7 @@ As a security engineer at [Sucuri](https://sucuri.net) (Sucuri was acquired by G

We will mainly focus on the simple [XOR cipher](https://en.wikipedia.org/wiki/XOR_cipher), which itself is an application of the [exclusive disjunction (XOR)](https://en.wikipedia.org/wiki/Exclusive_or) logical operation.

-Let’s examine how the **XOR** bitwise operator, ⨁, is used as a cipher additive for encryption/decryption:
+Let’s examine how the **XOR** bitwise operator, ⨁, is used as a cipher additive for encryption/decryption:

@@ -42,9 +43,9 @@ It is **important** to mention that PHP uses the **^** caret symbol for the **XO
"If both operands for the [...]
^ operators are strings, then the operation will be performed on the ASCII values of the characters that make up the strings and the result will be a string."
>
-PHP malware using **XOR** will often use two **operands**, which are defined variables containing strings (e.g. **$a** and **$b**). The string in one of the operand variables is the **[plaintext](https://en.wikipedia.org/wiki/Plaintext)** (in this example, the malicious code) that we wish to encrypt, and the string in the other operand variable is what is known as a **pre-shared key**.
+PHP malware using **XOR** will often use two **operands**, which are defined variables containing strings (e.g. **$a** and **$b**). The string in one of the operand variables is the **[plaintext](https://en.wikipedia.org/wiki/Plaintext)** (in this example, the malicious code) that we wish to encrypt, and the string in the other operand variable is what is known as a **pre-shared key**.

-It’s also important to know that **XOR** operates as a **[symmetrical](https://en.wikipedia.org/wiki/Symmetric-key_algorithm)** form of encryption - which means that we can encrypt and decrypt using the same key.
+It’s also important to know that **XOR** operates as a **[symmetrical](https://en.wikipedia.org/wiki/Symmetric-key_algorithm)** form of encryption - which means that we can encrypt and decrypt using the same key.

Let us break this down with a simple PHP example:

```
@@ -58,7 +59,7 @@
echo $c ; //result = EW@@
```

-Remember that PHP’s **^** (**XOR**) bitwise operator converts each character to its corresponding ASCII value, which can be [checked with an ASCII table](https://simple.wikipedia.org/wiki/ASCII#/media/File:ASCII-Table-wide.svg).
+Remember that PHP’s **^** (**XOR**) bitwise operator converts each character to its corresponding ASCII value, which can be [checked with an ASCII table](https://simple.wikipedia.org/wiki/ASCII#/media/File:ASCII-Table-wide.svg).

A visual table of the **XOR** symmetric encryption:

@@ -128,9 +129,9 @@ A visual table of the **XOR** symmetric encryption:
-
-After undergoing the **XOR** bitwise operation, we are left with the ASCII decimal values of **69 87 64 64** (or **EW@@** after we convert back to ASCII characters). The result is the [ciphertext](https://en.wikipedia.org/wiki/Ciphertext) (**C**), also known as the encrypted text.
+After undergoing the **XOR** bitwise operation, we are left with the ASCII decimal values of **69 87 64 64** (or **EW@@** after we convert back to ASCII characters). The result is the [ciphertext](https://en.wikipedia.org/wiki/Ciphertext) (**C**), also known as the encrypted text.

Remember, this is symmetrically encrypted — so the key (**B**) to decrypt it back into plaintext (**A**) is the same key that was used to encrypt it.

@@ -143,7 +144,7 @@ In the case of PHP malware that utilizes XOR encryption for obfuscation, we prim

**Only the ciphertext (C) in the infected file is stored. The malicious user submits the key (B) in an HTTP request to the PHP file, which then decrypts the ciphertext (C) to plaintext (A). The now-decrypted result (A) is legible code and can then be executed by PHP.**

-
+
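As a quick illustration of this symmetry (our own sketch in Python, not code from any malware sample), the function below mirrors what PHP's string **XOR** does, except that it repeats the key to cover the whole input, whereas PHP's `^` truncates the result to the shorter operand:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the input with the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attacker code"            # A: the plaintext
key = b"secret"                         # B: the pre-shared key
ciphertext = xor_bytes(plaintext, key)  # C: unreadable without the key

# Applying the same key a second time recovers the plaintext: (A ^ B) ^ B == A.
assert xor_bytes(ciphertext, key) == plaintext
```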
@@ -159,7 +160,7 @@ In the case of PHP malware that utilizes XOR encryption for obfuscation, we prim
C ciphertext in file
-The plaintext (**A**) string is malicious code that attackers then encrypt on their end by using **XOR** with a key (**B**) string. The result ends up being the ciphertext (**C**), which is unreadable as PHP code.
+The plaintext (**A**) string is malicious code that attackers then encrypt on their end by using **XOR** with a key (**B**) string. The result ends up being the ciphertext (**C**), which is unreadable as PHP code.

This ciphertext (**C**) is then added to a compromised website's file(s), along with some PHP code that uses some user-provided data (e.g. via $_POST). This allows the hacker to send the value of the key and use the infected PHP file to decrypt the ciphertext (**C**) back to plaintext (**A**) (their malicious PHP code), then execute it using something like the eval() function.

@@ -172,7 +173,7 @@ One problem is that, depending on the amount of encrypted malicious PHP code, th

**Only the key (B) in the infected file is stored. The hacker then provides the ciphertext (C) in their request and it is XOR’ed with the existing key value in the file to form our plaintext (A) — which is then executed.**

-
+
@@ -189,7 +190,7 @@

-For this second method, let’s take a look at a recent backdoor that was found in a file named “**01f008ec.php**” within the root directory of an infected website.
+For this second method, let’s take a look at a recent backdoor that was found in a file named “**01f008ec.php**” within the root directory of an infected website.

PHP file formatted and segmented for clarity:

@@ -214,9 +215,9 @@ The backdoor starts with an isset condition that mu

```
```

-Before the **XOR** operation, the file will use file_get_contents('php://input') to read raw data through **GET**/**POST** requests (XOR encoding may add special characters that can be broken if not transmitted raw) to the malicious file. It’s then split into an array using str_split and assigned to the variable $part within a foreach loop.
+Before the **XOR** operation, the file will use file_get_contents('php://input') to read raw data through **GET**/**POST** requests (XOR encoding may add special characters that can be broken if not transmitted raw) to the malicious file. It’s then split into an array using str_split and assigned to the variable $part within a foreach loop.

-We can see the malicious file’s code eventually evaluates code through a separate variable, eval($res) — this lets us know that the variable $res should contain the plaintext PHP code in order to be successfully executed.
+We can see the malicious file’s code eventually evaluates code through a separate variable, eval($res) — this lets us know that the variable $res should contain the plaintext PHP code in order to be successfully executed.

```
```

-This is a great example of the properties of symmetrical cryptography. The hacker uses the key (**B**) value to create the unreadable ciphertext (**C**) on their end, then the ciphertext (**C**) is submitted to the infected file via a GET/POST request where it will be **XOR**’ed with the key (**B**) once more. The result is a plaintext (**A**) value that contains the malicious PHP code to be executed by the eval() function.
+This is a great example of the properties of symmetrical cryptography. The hacker uses the key (**B**) value to create the unreadable ciphertext (**C**) on their end, then the ciphertext (**C**) is submitted to the infected file via a GET/POST request where it will be **XOR**’ed with the key (**B**) once more. The result is a plaintext (**A**) value that contains the malicious PHP code to be executed by the eval() function.

-It’s important to note that the plaintext (**A**) gets evaluated, so you won’t ever see it — nor the ciphertext (**C**) being sent to the infected file (unless you are logging HTTP requests or otherwise inspecting HTTP packets).
+It’s important to note that the plaintext (**A**) gets evaluated, so you won’t ever see it — nor the ciphertext (**C**) being sent to the infected file (unless you are logging HTTP requests or otherwise inspecting HTTP packets).

-One advantage of using this method is that even if HTTP requests are being inspected (e.g. by a firewall), website owners wouldn’t see the plaintext PHP, which can aid in evading detection.
Another advantage is that the infection in the file can be much smaller (~235 characters on one line) than the previous method, which can have thousands of characters — this can also help prevent administrators from identifying malicious changes to files.
+One advantage of using this method is that even if HTTP requests are being inspected (e.g. by a firewall), website owners wouldn’t see the plaintext PHP, which can aid in evading detection. Another advantage is that the infection in the file can be much smaller (~235 characters on one line) than the previous method, which can have thousands of characters — this can also help prevent administrators from identifying malicious changes to files.
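To summarize the difference, here is a hypothetical Python analogue of this second method (again our own sketch, not the actual PHP backdoor): only the key lives in the infected code, and the ciphertext arrives with each request:

```python
KEY = b"secret"  # B: the only value hardcoded in the infected file

def decode_request_body(raw_body: bytes) -> bytes:
    # XOR the attacker-supplied ciphertext (C) with the stored key (B) to
    # recover the plaintext payload (A); the real malware would then run the
    # result through something like eval().
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(raw_body))
```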

The Third Method

@@ -243,8 +244,8 @@ This third method doesn’t have the key value (**B**) hardcoded into the infect

Conclusion

-In conclusion, **XOR** bitwise operations in PHP malware can help hackers evade certain security controls, but their symmetric cryptography means that anyone who knows the pre-shared secret key can decrypt/encrypt using it.
+In conclusion, **XOR** bitwise operations in PHP malware can help hackers evade certain security controls, but their symmetric cryptography means that anyone who knows the pre-shared secret key can decrypt/encrypt using it.

-Users who believe that their site may be infected with a PHP backdoor can refer to our [hacked website guides](https://sucuri.net/guides/) for cleanup instructions, or [reach out to our remediation team](https://sucuri.net/website-malware-removal/) for assistance — we’re always happy to lend a hand.
+Users who believe that their site may be infected with a PHP backdoor can refer to our [hacked website guides](https://sucuri.net/guides/) for cleanup instructions, or [reach out to our remediation team](https://sucuri.net/website-malware-removal/) for assistance — we’re always happy to lend a hand.

-If you would like to receive email notifications for technical website security posts, subscribe to our [blog feed](https://info.sucuri.net/subscribe-to-security).
+If you would like to receive email notifications for technical website security posts, subscribe to our [blog feed](https://info.sucuri.net/subscribe-to-security).
diff --git a/_posts/2019-11-19-frontend-caching-quick-start.md b/_posts/2019-11-19-frontend-caching-quick-start.md
index f4fcabc..1f52082 100644
--- a/_posts/2019-11-19-frontend-caching-quick-start.md
+++ b/_posts/2019-11-19-frontend-caching-quick-start.md
@@ -4,6 +4,7 @@ title: "Frontend Caching Quick Start"
date: 2019-11-19 09:00:00 -0700
cover: /assets/images/2019-11-19-frontend-caching-quick-start/cover.jpg
excerpt: This post provides a quick start guide to front end caching, helping developers create an optimal caching strategy.
+canonical: https://godaddy.com/resources/news/frontend-caching-quick-start
authors:
- name: Mayank Jethva
  title: Software Engineer
@@ -136,7 +137,7 @@ _On the other hand, if the file has newer content, the following flow between th

> Side Note: The [HTTP 1.1 Specification](https://tools.ietf.org/html/rfc2616) states: "To mark a response as 'never expires', an origin server sends an
> Expires date approximately one year from the time the response is
> sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
-> year in the future."
+> year in the future."
> **Hence, the recommended `max-age` value for a resource which never expires is 1 year.** By setting `max-age=31536000`, we're telling the client to cache it for up to 31536000 seconds, which is 1 year from the time of the request.

- `healthcheck.html`
diff --git a/_posts/2019-11-26-making-frameworks.md b/_posts/2019-11-26-making-frameworks.md
index 398e8a1..3157818 100644
--- a/_posts/2019-11-26-making-frameworks.md
+++ b/_posts/2019-11-26-making-frameworks.md
@@ -6,6 +6,7 @@ cover: /assets/images/making-frameworks/cover.jpg
excerpt: A look at how we develop Node.js apps today and how we can do it better using Gasket to quickly compose reusable elements of apps into feature-rich frameworks.
+canonical: https://godaddy.com/resources/news/making-frameworks
authors:
- name: Andrew Gerard
  url: https://www.linkedin.com/in/andrewgerard/
diff --git a/_posts/2019-12-03-is-my-host-fast-yet.md b/_posts/2019-12-03-is-my-host-fast-yet.md
index f42b4d9..5d34513 100644
--- a/_posts/2019-12-03-is-my-host-fast-yet.md
+++ b/_posts/2019-12-03-is-my-host-fast-yet.md
@@ -7,6 +7,7 @@ excerpt: You put some files on a server and users grab them; that’s hosting, r
Sure, if you’re stuck in the 90’s. While there have been significant innovations
in this space, it’s been largely uneventful in the last 10 years. This post
aims to enlighten and educate on innovations in this industry.
+canonical: https://godaddy.com/resources/news/is-my-host-fast-yet
authors:
- name: Aaron Silvas
  url: https://www.linkedin.com/in/aaron-silvas-5817626/
@@ -36,7 +37,7 @@ This report qualifies less than 200ms TTFB (Time To First Byte) as fast, 200-100

## What is this wizardry?

No tricks, just physics. Approximately [every 100km (~62mi) from data centers adds 1ms of latency](https://cloud.google.com/solutions/best-practices-compute-engine-region-selection) to client requests (RTT). Based on the worst case distance (half the circumference of earth), round trips can theoretically reach upwards of 200ms over fiber. This is before factoring in indirect routes, two to three round trips to establish connections (predominantly secured), and last-mile latencies from Internet Service Providers. If you’re serving all users from a single data center, parts of the world are likely to see roughly an overhead of 600-800ms simply due to distance. Add in the overhead of host response, and this can quickly reach 1000ms and beyond before your users begin to see something render.

-If you’re still not sold on how critical TTFB is between your host and your client, let's look at this problem through another lens. Client latencies have a far greater (and linear) impact [compared to that of bandwidth](https://www.igvita.com/2012/07/19/latency-the-new-web-performance-bottleneck/). This means optimizing response times between client and host will often have a greater impact than reducing the size of your applications -- though naturally you should do both. Why then are we so obsessed with “fat pipes”?
+If you’re still not sold on how critical TTFB is between your host and your client, let's look at this problem through another lens. Client latencies have a far greater (and linear) impact [compared to that of bandwidth](https://www.igvita.com/2012/07/19/latency-the-new-web-performance-bottleneck/). This means optimizing response times between client and host will often have a greater impact than reducing the size of your applications -- though naturally you should do both. Why then are we so obsessed with “fat pipes”?

## Custom stack

Surely you didn’t think physics was the only hand at play here? After all, you

![Diagram](https://w3c.github.io/navigation-timing/timestamp-diagram.svg)

-When we designed the hosting stack for GoDaddy Website Builder over 6 years ago, there were numerous off-the-shelf technologies we could have leveraged to get the job done, and done well. That was the easy and most obvious path. Instead we approached the problem as an opportunity to cater the solution specifically to the needs of users spanning the globe, and ultimately to provide a world-class platform from which our customers could be proud to host their ideas and accelerate their ventures.
Running a hosting platform on Node.js, which is JavaScript running in Google’s V8 engine, was met with skepticism. After all, fast JavaScript is an oxymoron, right?
+When we designed the hosting stack for GoDaddy Website Builder over 6 years ago, there were numerous off-the-shelf technologies we could have leveraged to get the job done, and done well. That was the easy and most obvious path. Instead we approached the problem as an opportunity to cater the solution specifically to the needs of users spanning the globe, and ultimately to provide a world-class platform from which our customers could be proud to host their ideas and accelerate their ventures. Running a hosting platform on Node.js, which is JavaScript running in Google’s V8 engine, was met with skepticism. After all, fast JavaScript is an oxymoron, right?

Runtime language matters, especially for CPU-bound operations. But when it comes to I/O-bound tasks, which is often the case with hosting, your runtime plays an important but less significant role. Instead of chaining together general-purpose technologies - ranging from load balancers to web servers and caching - we approached the problem with a single cohesive stack that has full control over the quality of experience throughout the process required to serve a customer's request. This approach has allowed us to emphasize customer experience over throughput by performing all necessary computations in parallel.
diff --git a/_posts/2019-12-05-securing-the-cloud.md b/_posts/2019-12-05-securing-the-cloud.md
index 1bb54c3..e10faf1 100644
--- a/_posts/2019-12-05-securing-the-cloud.md
+++ b/_posts/2019-12-05-securing-the-cloud.md
@@ -9,6 +9,7 @@ excerpt: In March of 2018, GoDaddy and AWS announced a multi-year transition
 same on-premise tools and infrastructure to secure a cloud environment. To
 address this, we developed a serverless containerized framework on AWS to
 continuously detect and track security issues.
+canonical: https://godaddy.com/resources/news/securing-the-cloud
authors:
- name: Greg Bailey
  title: Principal Software Engineer
diff --git a/_posts/2019-12-10-Kernel-Bypass-Networking.md b/_posts/2019-12-10-Kernel-Bypass-Networking.md
index 13b2c89..93da590 100644
--- a/_posts/2019-12-10-Kernel-Bypass-Networking.md
+++ b/_posts/2019-12-10-Kernel-Bypass-Networking.md
@@ -4,6 +4,7 @@ title: "Kernel-Bypass Networking"
date: 2019-12-10 09:00:00 -0700
cover: /assets/images/kernel_bypass_networking.jpg
excerpt: The DNS Team explored the possibility of using a software-based router instead of a hardware router. This post examines the reasons for using a software-based router with Kernel-Bypass Networking.
+canonical: https://godaddy.com/resources/news/kernel-bypass-networking
authors:
- name: Benjamin Bowen
  title: Senior Development Manager
@@ -32,7 +33,7 @@ For a server, this process is reversed. If that server is acting as a router, a

    Layer 4 (Transport): Coordinates data transfer between system and hosts, including error-checking and data recovery.
    Layer 3 (Network): Determines how data is sent to the receiving device. It's responsible for packet forwarding, routing, and addressing.
    Layer 2 (Data Link): Translates binary into signals and allows upper layers to access media.
-    Layer 1 (Physical): Transmits signals over media. Actual hardware sits at this layer.
+    Layer 1 (Physical): Transmits signals over media. Actual hardware sits at this layer.
```

The design of the BSD sockets API imposes constraints on how the data from a network source is handled by the OS.
When a packet arrives from the NIC, it’s wrapped in a buffer object. That allocation can interfere with the dynamic memory allocator of the OS. For example, the buffer object can be forwarded between CPU cores in a multi-CPU system and accessed from multiple threads, which then requires locks for concurrent accesses. diff --git a/_posts/2020-01-10-better-prediction-interval-with-neural-network.md b/_posts/2020-01-10-better-prediction-interval-with-neural-network.md index 21dddfd..aafb362 100644 --- a/_posts/2020-01-10-better-prediction-interval-with-neural-network.md +++ b/_posts/2020-01-10-better-prediction-interval-with-neural-network.md @@ -4,6 +4,7 @@ title: "Better prediction intervals with Neural Networks" date: 2020-01-10 09:00:00 -0700 cover: /assets/images/better-prediction-interval-with-neural-network/cover.jpg excerpt: GoDaddy machine learning team presents Expanded Interval Minimization (EIM), a novel loss function to generate prediction intervals using neural networks. Prediction intervals are a valuable way of quantifying uncertainty in regression problems. Good prediction intervals should contain the actual value and have a small mean width of the bounds. We compare EIM to three published techniques and show EIM produces on average 1.37x tighter prediction intervals and in the worst case 1.06x tighter intervals across two large real-world datasets and varying coverage levels. +canonical: https://godaddy.com/resources/news/better-prediction-interval-with-neural-network authors: - name: Ying Yin Ting title: Senior Data Scientist @@ -15,17 +16,17 @@ authors: photo: https://avatars.githubusercontent.com/jansel --- -## Introduction +## Introduction GoDaddy has an automated service to give you the [appraisal for a domain name](https://www.godaddy.com/domain-value-appraisal) on the secondary market. The domain name value prediction is powered by a deep learning model that we described in a past [blog post](https://www.godaddy.com/engineering/2019/07/26/domain-name-valuation/). Now, another question that people may be curious about is: how confident is the model for that prediction? Can we quantify the uncertainty of the prediction? The answer is, YES! Using a prediction interval, we can quantify -the uncertainty of a regression problem. Instead of predicting a single value, one can predict a range of possible values, which we call a prediction interval. If the model is uncertain, it will give a -larger prediction interval; if the model is confident about the prediction, it will give a tighter prediction interval. Let’s take the domain appraisal problem as an example. +the uncertainty of a regression problem. Instead of predicting a single value, one can predict a range of possible values, which we call a prediction interval. If the model is uncertain, it will give a +larger prediction interval; if the model is confident about the prediction, it will give a tighter prediction interval. Let’s take the domain appraisal problem as an example. A normal regression model gives a single prediction for a domain name. `ThaiRestaurant.com` might be predicted -as $9,463. With prediction intervals, we instead have a range of predictions. For example, we might get a range from $9,000 to $10,000 for the same domain name. If the model is less certain about the prediction, the range can be larger, and the prediction interval may instead be $5,000 to $15,000. 
To dive more deeply into this topic, the GoDaddy machine learning team investigated this prediction interval problem and wrote a paper based on findings. In our paper, [Tight Prediction Intervals Using Expanded Interval Minimization](https://arxiv.org/abs/1806.11222), we were curious to see if there was a better way to develop a model that would output prediction intervals which are meaningful and precise. The following sections of this post summarize our approach and the results. +as $9,463. With prediction intervals, we instead have a range of predictions. For example, we might get a range from $9,000 to $10,000 for the same domain name. If the model is less certain about the prediction, the range can be larger, and the prediction interval may instead be $5,000 to $15,000. To dive more deeply into this topic, the GoDaddy machine learning team investigated this prediction interval problem and wrote a paper based on our findings. In our paper, [Tight Prediction Intervals Using Expanded Interval Minimization](https://arxiv.org/abs/1806.11222), we were curious to see if there was a better way to develop a model that would output prediction intervals which are meaningful and precise. The following sections of this post summarize our approach and the results.

## Prediction Interval Evaluation

@@ -47,7 +48,7 @@ PICP for a given model is 85%, we will shrink the prediction intervals to hit 80 the models we want to compare hit the same PICP target, we can compare the MPIW to check which model is the best.

-## Traditional Techniques 
+## Traditional Techniques

Before we dive into our new technique, let's first discuss existing techniques to construct a prediction interval model from related literature. The first and most common way to construct prediction intervals is to predict the variance directly. We can view the prediction interval to be constructed by a point of estimation plus and minus a multiple of the predicted standard deviation (which is the square root of the variance). If we are less certain, the variance will be higher; if we’re more certain, the variance will be smaller, which results in

@@ -62,9 +63,9 @@ The fixed bounds method is a naive baseline so we can get a floor of performance The technique trains a regression model to predict the true value then adds and subtracts a fixed percentage of that prediction value. For example, if the prediction is 3,000 and the fixed percentage is set to 30%, we can then construct a prediction interval with the lower bound to be -3000-(30%*3000) and the upper bound to be 3000+(30%*3000). +3000-(30%*3000) and the upper bound to be 3000+(30%*3000).

-### Maximum Likelihood Estimation 
+### Maximum Likelihood Estimation

For the maximum likelihood method, we directly build two neural networks. The first neural network predicts the true value (like the $9,463 prediction for `thairestaurant.com` domain name), and the

@@ -82,7 +83,7 @@ and calculate the variance of those predictions to be the estimation of how conf
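To make the comparison concrete before moving on, here is a minimal sketch of the two evaluation metrics and the fixed-bounds baseline described above, using numpy; the toy arrays are illustrative values, not data from the paper:

```python
import numpy as np

def picp(y_true, lower, upper):
    # Prediction Interval Coverage Probability: the fraction of
    # actual values that fall inside their predicted interval.
    return np.mean((y_true >= lower) & (y_true <= upper))

def mpiw(lower, upper):
    # Mean Prediction Interval Width: the average width of the bounds.
    return np.mean(upper - lower)

# Fixed-bounds baseline: point prediction plus and minus a fixed percentage.
preds = np.array([3000.0, 9463.0, 1200.0])     # toy point predictions
actuals = np.array([2800.0, 11000.0, 1150.0])  # toy true values
pct = 0.30
lower, upper = preds * (1 - pct), preds * (1 + pct)

print(picp(actuals, lower, upper))  # coverage achieved on this data
print(mpiw(lower, upper))           # width paid for that coverage
```

Once two models are scaled to hit the same PICP target, ranking them is purely a matter of comparing MPIW, which is exactly the comparison the rest of this section relies on.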
### Quantile Regression Method

-[Quantile regression](https://en.wikipedia.org/wiki/Quantile_regression) is a type of regression that predicts a specific quantile, such as the mean or 50%, of the data. 
+[Quantile regression](https://en.wikipedia.org/wiki/Quantile_regression) is a type of regression that predicts a specific quantile, such as the median (50%), of the data.

Using quantile regression, we can construct prediction intervals by training two models to output different quantiles of the prediction and thus construct an interval. For example, we @@ -93,9 +94,9 @@ outputs.

## Proposed technique - Expanded Interval Minimization

In our paper, we present a new way to build a prediction interval model: Expanded Interval -Minimization (EIM), a novel loss function for generating prediction intervals using neural networks. -For every neural network model, we need to provide a loss function that we want it to learn to minimize. -For example, for a regression problem, the loss function can be the mean square error - the mean of the squared difference between predictions and actual values. For the prediction interval problem, we want to hit the target PICP while minimizing the MPIW. We use the minibatch as a noisy estimate of the population PICP and MPIW. +Minimization (EIM), a novel loss function for generating prediction intervals using neural networks. +For every neural network model, we need to provide a loss function that we want it to learn to minimize. +For example, for a regression problem, the loss function can be the mean square error - the mean of the squared difference between predictions and actual values. For the prediction interval problem, we want to hit the target PICP while minimizing the MPIW. We use the minibatch as a noisy estimate of the population PICP and MPIW. For every minibatch of data that feeds into the neural network, we scale the prediction interval to hit the fixed and given PICP like the way we discussed in the [prediction interval evaluation section](#prediction-interval-evaluation). After hitting the given PICP, we can @@ -284,4 +285,4 @@ art by developing novel machine learning techniques like EIM. The proposed Expanded Interval Minimization (EIM) method for prediction intervals has significantly better results than the existing techniques. Compared to the next best technique, EIM produces 1.06x to 1.26x tighter prediction intervals given each target PICP (70%, 80%, and 90%). We hope that others will be able to -use EIM to generate tighter prediction intervals and apply this technique to broader use cases. \ No newline at end of file +use EIM to generate tighter prediction intervals and apply this technique to broader use cases. diff --git a/_posts/2020-01-27-b-root.md b/_posts/2020-01-27-b-root.md index 706f082..b388fd3 100644 --- a/_posts/2020-01-27-b-root.md +++ b/_posts/2020-01-27-b-root.md @@ -4,6 +4,7 @@ title: "GoDaddy Hosts DNS B Root Instances" date: 2020-01-27 09:00:00 -0700 cover: /assets/images/b-root/Root%20Servers%20in%20the%20World%20-%20Google%20My%20Maps.png excerpt: The root of the Domain Name System (DNS Root) is managed by 13 independent organizations, known as "A" through "M". This post discusses GoDaddy's partnership with one of those 13, known as the B Root, to augment their global presence. +canonical: https://godaddy.com/resources/news/b-root authors: - name: Brian Dickson title: Principal Software Engineer @@ -15,7 +16,7 @@ authors: This blog explains the DNS Root Servers and how GoDaddy is contributing by hosting B Root instances. It also provides a background of the Domain Name System (DNS) as a hierarchy. Lastly, this blog discusses the root of that hierarchy to help provide context for the various root server identities and independent operations, and to give a frame of reference for the discussion about the B Root servers. ## What is the Domain Name System? -The Domain Name System (DNS) is both a protocol, and a distributed database of information concerning hosts and services on the public Internet.
+The Domain Name System (DNS) is both a protocol, and a distributed database of information concerning hosts and services on the public Internet. DNS has scaled extremely well, handling the growth of the Internet for the last 35 years. The database side of DNS is a hierarchical system, forming a decentralized database of Internet information. Conceptually, it forms a tree with a single root, where every node below the root has a label with only local significance. diff --git a/_posts/2020-05-06-godaddy-splitio-collaboration.md b/_posts/2020-05-06-godaddy-splitio-collaboration.md index 05aa01d..3fa1d46 100644 --- a/_posts/2020-05-06-godaddy-splitio-collaboration.md +++ b/_posts/2020-05-06-godaddy-splitio-collaboration.md @@ -4,6 +4,7 @@ title: "GoDaddy x Split.io" date: 2020-05-06 09:00:00 -0700 cover: /assets/images/godaddy-splitio-collaboration/High-five_L.png excerpt: GoDaddy and Split.io recently joined forces to design and build a set of experimentation tools that enables A/B testing without a performance penalty. +canonical: https://godaddy.com/resources/news/godaddy-splitio-collaboration authors: - name: Celia Waggoner title: Software Engineering Manager diff --git a/_posts/2020-05-12-experimentation-practices.md b/_posts/2020-05-12-experimentation-practices.md index 8e19dac..c60304f 100644 --- a/_posts/2020-05-12-experimentation-practices.md +++ b/_posts/2020-05-12-experimentation-practices.md @@ -4,6 +4,7 @@ title: "Four tips for developing sound experimentation practices" date: 2020-05-13 08:00:00 -0800 cover: /assets/images/experimentation-practices/person-holding-blue-ballpoint-pen-on-white-notebook-669610.jpg excerpt: Best practices and lessons learned for teams moving towards experiment-driven development. +canonical: https://godaddy.com/resources/news/experimentation-practices authors: - name: Ellen O'Connor title: Senior Software Engineer diff --git a/_posts/2021-02-11-gasket-api-preset.md b/_posts/2021-02-11-gasket-api-preset.md index b29449f..243d607 100644 --- a/_posts/2021-02-11-gasket-api-preset.md +++ b/_posts/2021-02-11-gasket-api-preset.md @@ -6,12 +6,12 @@ cover: /assets/images/gasket-api-preset/cover.jpg excerpt: Create a simple API with Node.js and Express, using the Gasket API Preset. We'll also get a glimpse into generating API documentation with the new Gasket Swagger Plugin! options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/gasket-api-preset authors: - name: Kawika Bader title: Senior Software Engineer url: https://www.linkedin.com/in/kawikabader photo: /assets/images/gasket-api-preset/kawikabader.jpg -canonical: https://blog.gasket.dev/api-preset/ --- In this article, we'll learn how to create a simple API with Node.js and Express, using the Gasket API Preset. We'll also get a glimpse into generating API documentation with the new Gasket Swagger Plugin! @@ -247,7 +247,7 @@ Here we are defining a route using the `GET` method. This route will log a messa ### Starting Up The API -To start the API, run `npm run local` from the root of the `./fingerstache-coffee` directory (you may need to navigate to the project root directory, before running): +To start the API, run `npm run local` from the root of the `./fingerstache-coffee` directory (you may need to navigate to the project root directory, before running): ```bash cd fingerstache-coffee ``` @@ -390,7 +390,7 @@ module.exports = (app) => { Here we have documented the route using a JSDoc-style format that the swagger-jsdoc module can parse and render. 
More information on the various doc parameters can be found on the [swagger-jsdoc github](https://github.com/Surnet/swagger-jsdoc/blob/master/docs/GETTING-STARTED.md). -Now if we stop and restart the API +Now if we stop and restart the API: ```bash control+c npm run local ``` diff --git a/_posts/2021-05-07-godaddys-journey-to-the-cloud.md b/_posts/2021-05-07-godaddys-journey-to-the-cloud.md index 26c02ef..4b18695 100644 --- a/_posts/2021-05-07-godaddys-journey-to-the-cloud.md +++ b/_posts/2021-05-07-godaddys-journey-to-the-cloud.md @@ -6,6 +6,7 @@ cover: /assets/images/godaddys-journey-to-the-cloud/cover.jpg excerpt: In this blog post, we share information about GoDaddy's cloud journey, which began in early 2018 when we announced our partnership with AWS. Specifically, we describe the GoDaddy Public Cloud Portal, an application used to onboard engineering teams to AWS. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/godaddys-journey-to-the-cloud authors: - name: Jared Beauchamp title: Senior Software Engineer Manager @@ -41,37 +42,37 @@ The GoDaddy Public Cloud Portal initiative set forth the following goals: Meeting these goals is critical to the success of the GoDaddy cloud adoption journey.

-### Supporting the move to cloud and needed cultural change 
+### Supporting the move to cloud and needed cultural change

-This story about our journey here at GoDaddy is not just one of a technical feat. At the highest levels of the GoDaddy management team, there was an understanding that realizing the benefits and value from moving to the cloud will require cultural change. The key story communicated by the executive management team is the spirit of working together – we're in this 'move to the public cloud' together. Moreover, GoDaddy needs the participation, collaboration, and input/feedback from every engineer in the company to help navigate and optimize the journey. +This story about our journey here at GoDaddy is not just one of technical feats. At the highest levels of the GoDaddy management team, there was an understanding that realizing the benefits and value from moving to the cloud would require cultural change. The key story communicated by the executive management team is the spirit of working together – we're in this 'move to the public cloud' together. Moreover, GoDaddy needs the participation, collaboration, and input/feedback from every engineer in the company to help navigate and optimize the journey.

-GoDaddy leadership has always professed there is no book they could pick up at the company store that says 'this is right way to take GoDaddy to the public cloud'. 'The book does not exist – they are actually writing it every day'. It takes the shared experience of everybody that is involved in the migration to the cloud to get it right. GoDaddy needed their engineers experiencing it all together, and providing feedback and input on refinements from their perspective. The Application Services team (GoDaddy Cloud Center of Excellence) is in place to be the nexus of that experience across all teams, so the organization overall can collect and expand experience drawn from other teams as they onboard over time. +GoDaddy leadership has always professed there is no book they could pick up at the company store that says 'this is the right way to take GoDaddy to the public cloud'. 'The book does not exist – they are actually writing it every day'. It takes the shared experience of everybody that is involved in the migration to the cloud to get it right.
GoDaddy needed their engineers experiencing it all together, and providing feedback and input on refinements from their perspective. The Application Services team (GoDaddy Cloud Center of Excellence) is in place to be the nexus of that experience across all teams, so the organization overall can collect and expand experience drawn from other teams as they onboard over time.

-The culture and spirit of working together shows up in the GoDaddy initiative process, which is not just a centralized team defining the right solution with all others teams following. Instead, the process is where they gather a group of thought leaders across the company and agree on what the problem is – once they agree on the problem, they define and agree on what 'done' looks like (e.g. the form of done) – is it an application, a process, documentation, etc... Once they have this definition complete they ask the thought leaders to recommend and offer up a list of 8-10 contributors in the company that will contribute to the initiative to get to an actual answer. They've use this process, for example, for CDN architecture/design, defining tiers of applications and thus what level of security test should be applied to that tier of application and at what frequency, application encryption library innovations for teams, and creating the Must-Have's and Should-Do's list for raising the bar on engineering rigor that is discussed later. +The culture and spirit of working together shows up in the GoDaddy initiative process, which is not just a centralized team defining the right solution with all other teams following. Instead, the process is where they gather a group of thought leaders across the company and agree on what the problem is – once they agree on the problem, they define and agree on what 'done' looks like (e.g. the form of done) – is it an application, a process, documentation, etc... Once they have this definition complete, they ask the thought leaders to recommend and offer up a list of 8-10 contributors in the company that will contribute to the initiative to get to an actual answer. They've used this process, for example, for CDN architecture/design, defining tiers of applications and thus what level of security test should be applied to that tier of application and at what frequency, application encryption library innovations for teams, and creating the Must-Have's and Should-Do's list for raising the bar on engineering rigor that is discussed later.

-The culture of this contribution model was key for GoDaddy to leverage the expertise and diversity of their organization and to drive the speed of innovation they were looking to achieve with moving to the cloud. There's a well-defined pipeline for contributing.
If someone can see a standard infrastructure-as-code architecture component or deployment product that can be better, or a new feature they want to have within an infrastructure product definition they're building with, they just submit a PR. There's a really good pipeline defined on how to get that into production for their team and thus to benefit all the other GoDaddy DevOps teams going forward. -There's no doubt that there's more responsibility on teams moving to the AWS cloud than existed with the on-premises environment. In the new culture the company is asking teams to operate their own product, asking them to secure their own product, to be responsible for their own budget. This is definitely a lot of responsibility in the new 'DevSecFinOps' multi-responsibility model for teams. The culture needed to be supportive and make sure that teams are empowered to make their own decisions. Through the group-think type of Initiative Process we just discussed, GoDaddy has automated many of the things that used to be done on-premises manually, in some cases where they never had the ability to automate before. So, while there is more breadth of responsibility and things to do in the new world of cloud, there is much more automation across the board to offload each team also. +There's no doubt that there's more responsibility on teams moving to the AWS cloud than existed with the on-premises environment. In the new culture the company is asking teams to operate their own product, asking them to secure their own product, to be responsible for their own budget. This is definitely a lot of responsibility in the new 'DevSecFinOps' multi-responsibility model for teams. The culture needed to be supportive and make sure that teams are empowered to make their own decisions. Through the group-think type of Initiative Process we just discussed, GoDaddy has automated many of the things that used to be done on-premises manually, in some cases where they never had the ability to automate before. So, while there is more breadth of responsibility and things to do in the new world of cloud, there is much more automation across the board to offload each team also. -Communicating a strong vision and explaining the motivation for moving to the cloud is a key component of cultural change management for the company. GoDaddy communicated their motivation in going to the AWS cloud – the 3 major goals for raising the bar on customer experience and product excellence. +Communicating a strong vision and explaining the motivation for moving to the cloud is a key component of cultural change management for the company. GoDaddy communicated their motivation in going to the AWS cloud – the 3 major goals for raising the bar on customer experience and product excellence. -- Increased speed of delivery: get the features and the products to our customers faster -- Increased application performance: getting the applications closer to our customer, as well as freeing up time for our engineers and giving them better tools so that we can actually accelerate our own applications. -- Increased reliability & availability: This is the biggest goal that will drive architectural changes in the company as we move to the cloud. We need to build architectures that can withstand an entire AWS region going out, for example, and we stay up and running with no customer impact. The cloud allows new approaches that have not been available to us on-premises. 
+- Increased speed of delivery: get the features and the products to our customers faster +- Increased application performance: getting the applications closer to our customer, as well as freeing up time for our engineers and giving them better tools so that we can actually accelerate our own applications. +- Increased reliability & availability: This is the biggest goal that will drive architectural changes in the company as we move to the cloud. We need to build architectures that can withstand an entire AWS region going out, for example, and we stay up and running with no customer impact. The cloud allows new approaches that have not been available to us on-premises. -To manage this change, GoDaddy worked to achieve each of these goals while observing and adhering to necessary constraints to the business as they proceeded. The thought was to achieve the goals AND conform to the constraints at the same time – constraints related to Security, Application Architecture, Operational Readiness, Budget, and Compliance & Privacy. E.g. they will achieve increased speed of delivery while adhering to the budget, and while adhering to the necessary security standards, etc… +To manage this change, GoDaddy worked to achieve each of these goals while observing and adhering to necessary constraints to the business as they proceeded. The thought was to achieve the goals AND conform to the constraints at the same time – constraints related to Security, Application Architecture, Operational Readiness, Budget, and Compliance & Privacy. E.g. they will achieve increased speed of delivery while adhering to the budget, and while adhering to the necessary security standards, etc… -So how do they measure and make sure they get there? Within the defined constraints? The Must-Have's and Should-Do's list defines the bar. The Cloud Readiness Review implements the validation and approval against the bar. Then their standard S-P-A-Q metrics for measuring Speed, Performance, Availability, and Quality provide on-going metric measurement and reporting for achievement in production. We'll talk more about this in the next section. +So how do they measure and make sure they get there? Within the defined constraints? The Must-Have's and Should-Do's list defines the bar. The Cloud Readiness Review implements the validation and approval against the bar. Then their standard S-P-A-Q metrics for measuring Speed, Performance, Availability, and Quality provide on-going metric measurement and reporting for achievement in production. We'll talk more about this in the next section. -GoDaddy has applied the same Initiative approach to the definition and implementation of the S-P-A-Q metrics, including key members from various teams to improve and ratify the metrics over time. For example, measuring Availability has evolved, as originally many teams were measuring this differently making it difficult to compare and contrast team results. E.g. measuring by the second, the minute, from inside the datacenter, from outside the datacenter, if a ping failed once the service is down, or the ping must fail multiple times in succession for the service to be slated as down. There used to be a lack of consistency, and now the KPI dashboard shows consistently measured metrics across all services through the collaboration of initiative teams. +GoDaddy has applied the same Initiative approach to the definition and implementation of the S-P-A-Q metrics, including key members from various teams to improve and ratify the metrics over time. 
For example, measuring Availability has evolved, as originally many teams were measuring this differently, making it difficult to compare and contrast team results. E.g. measuring by the second, the minute, from inside the datacenter, from outside the datacenter, if a ping failed once, the service is down, or the ping must fail multiple times in succession for the service to be slated as down. There used to be a lack of consistency, and now the KPI dashboard shows consistently measured metrics across all services through the collaboration of initiative teams.

-Finally, they've set the terminology of the company to be consistent, with agreed definition of terms, and ultimately raising the bar for compliance across all teams driving overall service improvement for customers. All this is built on top of standard engineering practices, well-defined, communicated, and understood by all engineering teams. The final 'hurrah' is achieving increased Speed of Delivery. For this they focused on good CI-CD engineering processes and tools for starters. Team Engineering practices are defined and measured during onboarding through the cloud portal managed Cloud Readiness Review process. +Finally, they've set the terminology of the company to be consistent, with agreed definitions of terms, and ultimately raising the bar for compliance across all teams driving overall service improvement for customers. All this is built on top of standard engineering practices, well-defined, communicated, and understood by all engineering teams. The final 'hurrah' is achieving increased Speed of Delivery. For this they focused on good CI-CD engineering processes and tools for starters. Team Engineering practices are defined and measured during onboarding through the cloud portal managed Cloud Readiness Review process.

-The needed cultural change has been a journey; all GoDaddy colleagues are clearly in this together. It continues to require the entire GoDaddy community to participate and make this move to the cloud successful and optimal – to achieve the outcomes the company strives to deliver to customers. GoDaddy is a work in progress; and continues to iterate toward the company vision. +The needed cultural change has been a journey; all GoDaddy colleagues are clearly in this together. It continues to require the entire GoDaddy community to participate and make this move to the cloud successful and optimal – to achieve the outcomes the company strives to deliver to customers. GoDaddy is a work in progress and continues to iterate toward the company vision.

-### Genesis of the Public Cloud Portal 
+### Genesis of the Public Cloud Portal

-So, how is GoDaddy realizing the benefits and objectives of moving to the cloud, scaling across 1000’s of employees, 100’s of scrum teams, and creating an experience that accelerates engineering teams in serving their customers? Managing the deployment standards, setting up the cloud foundation and landing zones, organizing and collecting on-boarding information, tracking and reporting; is all too much to handle manually while supporting the scale and agility that is required. Enter the GoDaddy Public Cloud Portal, with the mission to deliver a seamless one stop shop for GoDaddy developers to learn, on-board, and manage their product and services in the cloud. Let's dive into the feature/functions brought together in the GoDaddy public cloud portal in support of development teams in the next section.
+So, how is GoDaddy realizing the benefits and objectives of moving to the cloud, scaling across 1000’s of employees, 100’s of scrum teams, and creating an experience that accelerates engineering teams in serving their customers? Managing the deployment standards, setting up the cloud foundation and landing zones, organizing and collecting on-boarding information, tracking and reporting is all too much to handle manually while supporting the scale and agility that is required. Enter the GoDaddy Public Cloud Portal, with the mission to deliver a seamless one stop shop for GoDaddy developers to learn, on-board, and manage their product and services in the cloud. Let's dive into the features/functions brought together in the GoDaddy public cloud portal in support of development teams in the next section.

## Public Cloud Portal ecosystem

diff --git a/_posts/2021-05-07-serverless-aws-servicecatalog-plugin.md b/_posts/2021-05-07-serverless-aws-servicecatalog-plugin.md index a6a7154..40111fb 100644 --- a/_posts/2021-05-07-serverless-aws-servicecatalog-plugin.md +++ b/_posts/2021-05-07-serverless-aws-servicecatalog-plugin.md @@ -6,6 +6,7 @@ cover: /assets/images/serverless-aws-servicecatalog-plugin/cover.jpg excerpt: The serverless-aws-servicecatalog plugin provides developers with the power of Serverless deployments while allowing companies to maintain governance over AWS resources by using AWS Service Catalog. This is one step on the path to unlock the power of no-managed-resource applications for enterprise uses. By taking advantage of higher order abstractions over CloudFormation, such as Service Catalog, teams working with self-created and managed custom products can also make use of Serverless to develop, maintain and deploy these innovative new runtimes. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/serverless-aws-servicecatalog-plugin authors: - name: John Smey title: Senior Software Engineer @@ -14,7 +15,7 @@ authors: The serverless design pattern made possible by Amazon API Gateway and AWS Lambda allows developers to build and run applications without having to maintain any persistent infrastructure. Serverless applications are becoming increasingly popular as more organizations move to cloud providers. Some of the core use cases for serverless applications include: auto-scaling web-sites and APIs, event processing and streaming, image or video processing and CICD. -The serverless architecture is a good fit for applications that fit the following criteria: +The serverless architecture is a good fit for applications that meet the following criteria: 1. You want the cloud provider to manage resources, availability and scaling 2. Some per-request latency isn’t a problem 3. You only want to pay for resources in active use @@ -32,7 +33,7 @@ In AWS, the design principle of "infrastructure as code" is achieved by using Cl ## Serverless and Service Catalog -**Serverless** generates a CloudFormation template which is used to deploy the AWS products required by a Serverless application. This will not work for developers that are restricted to using only Service Catalog products. +**Serverless** generates a CloudFormation template which is used to deploy the AWS products required by a Serverless application. This will not work for developers that are restricted to using only Service Catalog products.
To solve this problem, GoDaddy and AWS joined forces to create the [serverless-aws-servicecatalog](https://github.com/godaddy/serverless-aws-servicecatalog) plugin. This plugin allows an AWS admin to deploy a custom serverless product in Service Catalog. This product ID is then added to the Serverless configuration file. The plugin overrides the Serverless package:compileFunctions hook and inserts the CloudFormation templates from the specified Service Catalog product. @@ -48,12 +49,12 @@ Create a Serverless Service Catalog CloudFormation template to create the Servic 1. Install the Serverless framework ``` - npm install -g serverless + npm install -g serverless ``` 2. Create a Serverless project and add the plugin. - ``` + ``` serverless create --template aws-nodejs npm install serverless-aws-servicecatalog ``` @@ -75,7 +76,7 @@ Create a Serverless Service Catalog CloudFormation template to create the Servic 4. Run the following to create the stack. ``` - serverless deploy + serverless deploy ``` Developers can then write CICD tools to encapsulate stages of this process and automate the deployment and management of their serverless services. @@ -83,4 +84,4 @@ Developers can then write CICD tools to encapsulate stages of this process and a ### Summary The serverless-aws-servicecatalog plugin provides developers with the power of Serverless deployments while allowing companies to maintain governance over AWS resources by using AWS Service Catalog. This is one step on the path to unlock the power of no-managed-resource applications for enterprise uses. By taking advantage of higher order abstractions over CloudFormation, such as Service Catalog, teams working with self-created and managed custom products can also make use of Serverless to develop, maintain and deploy these innovative new runtimes. - \ No newline at end of file + diff --git a/_posts/2021-06-09-android-animated-pride-rainbow.md b/_posts/2021-06-09-android-animated-pride-rainbow.md index 67f1f39..872e35b 100644 --- a/_posts/2021-06-09-android-animated-pride-rainbow.md +++ b/_posts/2021-06-09-android-animated-pride-rainbow.md @@ -6,6 +6,7 @@ cover: /assets/images/android-animated-pride-rainbow/cover.jpg excerpt: In this post, we take a deep dive look into how we created a Pride-themed easter egg inside the Over Android App. The easter egg is a rainbow bounding box that is drawn using OpenGL. We take a look at how to setup the required code in order to get OpenGL to render the rainbow box on screen and learn a bit more about OpenGL along the way! options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/android-animated-pride-rainbow authors: - name: Rebecca Franks title: Principal Software Engineer @@ -101,7 +102,7 @@ version: ```kotlin val surfaceView = findViewById(R.id.surface) -// We use OpenGLES 3.0 because it has more features than 2.0 +// We use OpenGLES 3.0 because it has more features than 2.0 // It has a couple of nice newer features like - vertex array objects etc - more information [here](https://stackoverflow.com/a/38163130) surfaceView.setEGLContextClientVersion(3) surfaceView.setRenderer(RainbowBoxRenderer()) @@ -142,8 +143,8 @@ stored in the same array and we will inform GL of the stride length (ie the numb a standard box, this is the geometry we would use: ```kotlin -// each line represents a "vertex" that GL will read. In this example there are 4 vertices to draw a square. -// The first two values represent X,Y coords. 
The second two represent the corresponding coordinates of a texture/bitmap that should be loaded up at that point. +// each line represents a "vertex" that GL will read. In this example there are 4 vertices to draw a square. +// The first two values represent X,Y coords. The second two represent the corresponding coordinates of a texture/bitmap that should be loaded up at that point. val attributeValues = floatArrayOf( -1.0f, 1.0f, 0.0f, 1.0f, -1.0f, -1.0f, 0.0f, 0.0f, @@ -311,9 +312,9 @@ const vec4 COLORS[7] = vec4[]( out vec4 oColor; void main() { - // vProgress is interpolated between 0 - 1 by the vertex shader. + // vProgress is interpolated between 0 - 1 by the vertex shader. // We multiply by uTimeOffset to give the animation over time. - // We multiply uTimeOffset by 16 to make the speed of the animation a bit faster, and 0.125 to stretch out the gradient a bit more. + // We multiply uTimeOffset by 16 to make the speed of the animation a bit faster, and 0.125 to stretch out the gradient a bit more. float progress = (vProgress + uTimeOffset * 16.0) * 0.125; float colorIndex = mod(uDashCount * progress / 4.0, 6.0); // There are actually 6 colors, not 7 vec4 currentColor = COLORS[int(floor(colorIndex))]; @@ -394,10 +395,10 @@ At GoDaddy everyone is welcome, we strongly believe that diverse teams build bet representation from all different groups. So for pride month, we are highlighting the GoDaddy United (LGBTQ+) group, which is designed to ensure that within the walls of our company, everyone is able to be themselves, feels safe and is informed with regard to issues relating to the Lesbian, Gay, Bisexual, Transgender and Queer communities. For more -information about this and other initiatives - head to our website [here](https://careers.godaddy.com/diversity). +information about this and other initiatives - head to our website [here](https://careers.godaddy.com/diversity). We love adding little bits of user delight and we hope this puts a bit of a smile on people’s faces when they encounter it in the app. If you have any questions or feedback, feel free to reach out to me on Twitter [@riggaroo](https://twitter.com/riggaroo)! -![Pride Banner]({{site.baseurl}}/assets/images/android-animated-pride-rainbow/pride_2021_banner.png) \ No newline at end of file +![Pride Banner]({{site.baseurl}}/assets/images/android-animated-pride-rainbow/pride_2021_banner.png) diff --git a/_posts/2021-06-14-test-harness.md b/_posts/2021-06-14-test-harness.md index c9e601c..4c96c72 100644 --- a/_posts/2021-06-14-test-harness.md +++ b/_posts/2021-06-14-test-harness.md @@ -6,6 +6,7 @@ cover: /assets/images/test-harness/cover.jpg excerpt: Writing APIs around SDKs in multiple languages proves to be an effective method of implementing a language-agnostic integration test suite. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/test-harness authors: - name: Joe Bergeron title: Software Engineer III diff --git a/_posts/2021-07-07-radpack-your-dependencies.md b/_posts/2021-07-07-radpack-your-dependencies.md index 98377d0..8406f30 100644 --- a/_posts/2021-07-07-radpack-your-dependencies.md +++ b/_posts/2021-07-07-radpack-your-dependencies.md @@ -6,6 +6,7 @@ cover: /assets/images/radpack-your-dependencies/cover.jpg excerpt: Bundlers like Webpack do a great job at providing a toolset needed to deliver an optimal out-of-the-box delivery solution. Loaders on the other hand are focused on delivering only the requested assets, as they are needed, and have a much higher cacheability.
Radpack offers the best of both worlds. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/radpack-your-dependencies authors: - name: Aaron Silvas title: Sr Principal Architect diff --git a/_posts/2021-08-26-tartufo.md b/_posts/2021-08-26-tartufo.md index 67bda31..3991f4c 100644 --- a/_posts/2021-08-26-tartufo.md +++ b/_posts/2021-08-26-tartufo.md @@ -6,6 +6,7 @@ cover: /assets/images/tartufo/cover.jpg excerpt: In our never-ending quest to improve the security of our code and systems, GoDaddy has been tackling the task of removing all secrets and credentials from all code across the company. Read the story of the process, tools, and challenges we have faced in this journey. options: - full-bleed-cover +canonical: https://godaddy.com/resources/news/tartufo authors: - name: Joey Wilhelm title: Sr Software Engineer diff --git a/_posts/2021-09-29-godaddy-response-csam.md b/_posts/2021-09-29-godaddy-response-csam.md index 467e930..ec07650 100644 --- a/_posts/2021-09-29-godaddy-response-csam.md +++ b/_posts/2021-09-29-godaddy-response-csam.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: GoDaddy takes an unsparing stance when it comes to hosting CSAM (Child Sexual Abuse Material). We use many resources available to detect, remove, and report hosted CSAM on our platform(s). We do not allow content that sexually exploits or endangers minors. In this blog post we discuss how we’ve helped to protect children – within our systems and beyond – and how we watch for and fight this abhorrent crime. keywords: CSAM Investigation, CSEAI, Child Safety, Child Abuse, Grooming +canonical: https://godaddy.com/resources/news/godaddy-response-csam authors: - name: Akshay Grover title: Mgr - Software Development diff --git a/_posts/2021-11-08-android-state-management-mvi.md b/_posts/2021-11-08-android-state-management-mvi.md index b4ca566..07c78d8 100644 --- a/_posts/2021-11-08-android-state-management-mvi.md +++ b/_posts/2021-11-08-android-state-management-mvi.md @@ -5,8 +5,9 @@ date: 2021-11-05 00:00:00 -0700 cover: /assets/images/android-state-management-mvi/cover.jpg options: - full-bleed-cover -excerpt: In this post, we will look at the journey that the GoDaddy Studio Android team took with how UI state is managed across the app. We will cover MVVM and how it caused issues, the initial MVI implementation and the issues we faced. Finally, we will look at how we landed on using Spotify’s Mobius Framework for managing state. +excerpt: In this post, we will look at the journey that the GoDaddy Studio Android team took with how UI state is managed across the app. We will cover MVVM and how it caused issues, the initial MVI implementation and the issues we faced. Finally, we will look at how we landed on using Spotify’s Mobius Framework for managing state. keywords: Android, Architecture, MVI, MVVM, State machine, Mobius +canonical: https://godaddy.com/resources/news/android-state-management-mvi authors: - name: Rebecca Franks title: Software Development Engineer IV @@ -24,11 +25,11 @@ The GoDaddy Studio Android app began with a fresh codebase around the end of 201 As time went on, we started facing issues with the MVI approach. The way we implemented it, unfortunately, allowed for race conditions and caused a lot of headaches for the team. It was at this point that we decided to investigate using existing frameworks for unidirectional data flow and see if they could solve the problems we were seeing. 
After evaluating a few, we settled on using the [Spotify Mobius Framework](https://github.com/spotify/mobius) for various reasons which we will dive into in this blog post.

-## State Management 
+## State Management

-Let’s step back a bit and learn about the issues we faced with our approaches over the past couple of years. What does it mean when we talk about State Management for UI? State management refers to keeping track of how a user interface should look and how it should react to different user inputs. On a complex screen with plenty of buttons, gestures and text input - managing the state is a full-time concern. +Let’s step back a bit and learn about the issues we faced with our approaches over the past couple of years. What does it mean when we talk about State Management for UI? State management refers to keeping track of how a user interface should look and how it should react to different user inputs. On a complex screen with plenty of buttons, gestures and text input - managing the state is a full-time concern.

-Take this screen for example: 
+Take this screen for example: {:refdef: style="text-align: center;"} ![]({{site.baseurl}}/assets/images/android-state-management-mvi/OverallState.gif){: width="250" } {: refdef} There are plenty of concerns that need to be managed and tracked on this screen - What is the currently selected tool on screen? - What are the layers and properties that should be displayed in this project? -Not only is there the current state of what should be rendered on screen, but there are many different interactions that need to mutate this state - scaling gestures, button taps, colour changes, etc. +Not only is there the current state of what should be rendered on screen, but there are many different interactions that need to mutate this state - scaling gestures, button taps, colour changes, etc.

## MVVM and the issues we faced 😣

A couple of years ago, the Android community began advocating for using **some kind of architecture,** as we all realized that placing all your logic inside the Activity was a perfect recipe for the messiest spaghetti bolognaise you could imagine (delicious but not very elegant).

-First came **MVP** (Model-View-Presenter) and there were great benefits to applying this architecture to our apps: we got testability, separation of concerns and the ability to re-use logic on other screens. 
+First came **MVP** (Model-View-Presenter) and there were great benefits to applying this architecture to our apps: we got testability, separation of concerns and the ability to re-use logic on other screens.

-Then came along **MVVM** (Model-View-ViewModel), which solved a lot of issues that MVP had: mostly the ability to have the ViewModel unaware of the View or who was listening to any changes that are happening. 
+Then came along **MVVM** (Model-View-ViewModel), which solved a lot of issues that MVP had: mostly the ability to have the ViewModel unaware of the View or who was listening to any changes that are happening.

We were using **MVVM** in certain places in our app, but we very quickly found issues with how we approached using MVVM. @@ -79,7 +80,7 @@ class ProjectEditViewModel: ViewModel() { This example looks good - loading and setting a project. But think of this scenario: a user clicks on the “**create project**" button, and it fails, populating the “**error**" `LiveData`. They click "create project" again and it succeeds but the error value is still populated.
Now we have a loaded project and an error screen shown at the same time; what is the correct state to show to a user?

-Some might solve this problem by saying: “You need to reset the `error.value` inside `onSuccess`“ and whilst that could help improve things, the very nature of these `LiveData` observables being separate objects, is the **bigger** issue here. Let’s go a bit deeper into why multiple observables can be problematic. +Some might solve this problem by saying: “You need to reset the `error.value` inside `onSuccess`“ and whilst that could help improve things, the very nature of these `LiveData` observables being separate objects is the **bigger** issue here. Let’s go a bit deeper into why multiple observables can be problematic.

This is how we would typically observe these `LiveData` objects for changes inside the Fragment: ```kotlin class ProjectFragment: Fragment() { } ```

-From this example, it is difficult to know what the UI of the screen will look like at any point because each `LiveData` observable can emit a new state at **any point in time**. If we start adding new functions to our `ViewModel` that emit new loading or error states, would you be able to describe what the UI will look like at a single point in time? +From this example, it is difficult to know what the UI of the screen will look like at any point because each `LiveData` observable can emit a new state at **any point in time**. If we start adding new functions to our `ViewModel` that emit new loading or error states, would you be able to describe what the UI will look like at a single point in time?

This can result in a [race condition](https://en.wikipedia.org/wiki/Race_condition) since these separate observables can emit state changes independently. What would the state look like if an error is emitted but there is currently a project loaded? @@ -117,7 +118,7 @@ There are several drawbacks when using this approach of MVVM with separate LiveD - There is no single snapshot to be able to recreate the UI from easily. - This also then begs the question, are we handling all the potential cases here?

-One way to potentially improve on this behaviour, is to use data classes that contain the state information in one class and expose only a singular LiveData object to the UI. This would help solve the issue of having multiple observables emitting different state that we’d need to keep track of, and it's a building block of how MVI can work, as we will explore next. +One way to potentially improve on this behaviour is to use data classes that contain the state information in one class and expose only a singular LiveData object to the UI. This would help solve the issue of having multiple observables emitting different state that we’d need to keep track of, and it's a building block of how MVI can work, as we will explore next.

# MVI / Unidirectional Data Flow @@ -149,7 +150,7 @@ As mentioned at the start of this post, we took inspiration for our MVI implemen {: refdef}

-With this mechanism in mind, the code looked as follows.
The `EditorAction`s were fired from the UI, and the `EditorState` is an example of what the single state of the UI could look like: +With this mechanism in mind, the code looked as follows. The `EditorAction`s were fired from the UI, and the `EditorState` is an example of what the single state of the UI could look like: ```kotlin sealed class EditorAction { @@ -227,7 +228,7 @@ class EditorViewModel : ViewModel() { } ```

-Then, inside our fragment, we were now observing only the **single state** that was emitted from the `ViewModel`, and not a bunch of observables as shown inside the MVVM example. 
+Then, inside our fragment, we were now observing only the **single state** that was emitted from the `ViewModel`, and not a bunch of observables as shown inside the MVVM example.

```kotlin class MainActivity : Fragment() { @@ -253,15 +254,15 @@ class MainActivity : Fragment() { This approach that we initially went with had many benefits and advantages over multiple `LiveData` observables as we saw in the MVVM example earlier. Some of the advantages include:

-- We are now avoiding issues with different states being emitted from different `LiveData` objects that can emit at any time. We have a **single state** controlling this. 
-- Using the data class `.copy()` mechanism to update parts of the state that have changed, helped to not lose data along the way, as you only change parts of the object that need to change. 
-- Our UI layer is minimal now - it only fires actions that have happened and there is not any logic sitting inside the view. 
-- There is a clearer separation of concerns, Actions, Processors, Reducers controlling the UI state. Nothing external is changing the state. This makes it easier to test and ensure it is doing the correct thing. 
+- We are now avoiding issues with different states being emitted from different `LiveData` objects that can emit at any time. We have a **single state** controlling this. 
+- Using the data class `.copy()` mechanism to update parts of the state that have changed, helped to not lose data along the way, as you only change parts of the object that need to change. 
+- Our UI layer is minimal now - it only fires actions that have happened and there is not any logic sitting inside the view. 
+- There is a clearer separation of concerns, with Actions, Processors, and Reducers controlling the UI state. Nothing external is changing the state. This makes it easier to test and ensure it is doing the correct thing. 
- There is also a clear pattern that keeps the code clean and allowed us to separate actions into different files, so we didn’t have all the logic inside the ViewModel file either. (We have over 200 unique actions that can happen on a single screen)

-But not everything worked as expected with this MVI setup either! 😩 Although for the most part things seemed to work well on the surface, we started observing strange crashes in production and a few race conditions along the way. 
+But not everything worked as expected with this MVI setup either! 😩 Although for the most part things seemed to work well on the surface, we started observing strange crashes in production and a few race conditions along the way.

-Let’s talk a bit about the issues we faced with this particular MVI approach.
(Did we mention, it has been a long journey?😅)

## Issues with our MVI implementation

@@ -303,7 +304,7 @@ Our MVI implementation was fine when events were sent sporadically and were proc Having identified all the shortcomings of our current implementation, we came up with a new set of requirements for the new implementation:

-- Unidirectional flow of data: actions in, state out - we knew this was a good assumption and wanted to stick with it. 
+- Unidirectional flow of data: actions in, state out - we knew this was a good assumption and wanted to stick with it. 
- Concurrency - Synchronized state access - we couldn’t let multiple concurrent jobs read and write state as they pleased - Non-blocking events processing - receiving events had to be as fast as possible, however, processing could be much slower (i.e. requiring slow I/O operations, like network calls) and should not block the processing of new events @@ -403,16 +404,16 @@ After digging through and learning the ins and outs of Mobius, we decided to sta With every new framework or architecture decision, there is likely never going to be the solution that fits everyone, and adopting something like the Mobius Framework also comes with some disadvantages:

-- It requires quite a bit of boilerplate code to set up. We’ve solved this using shared code templates to generate most of the classes we need. 
-- New framework for people to learn. Any new engineer needs to spend a bit of time learning about the framework, working through some examples and implementing a feature with it. 
+- It requires quite a bit of boilerplate code to set up. We’ve solved this using shared code templates to generate most of the classes we need. 
+- New framework for people to learn. Any new engineer needs to spend a bit of time learning about the framework, working through some examples and implementing a feature with it.

# Summary 🎉

-We’ve learnt a lot over the past couple of years, and I don’t doubt that we won’t learn more in the future. 
+We’ve learnt a lot over the past couple of years, and I don’t doubt that we’ll learn more in the future.

-Right now, we have been successfully using Spotify’s Mobius Framework in our app for the past year and have migrated our largest piece of work - the Canvas Editor to use it too. After switching to Mobius in the Canvas Editor, we observed fewer bugs and race conditions with state were resolved. The level of testing and separation of concerns we’ve achieved using MVI over the years has improved our code quality and eliminated the “god class” activity/view Model that we’ve seen in the past. +Right now, we have been successfully using Spotify’s Mobius Framework in our app for the past year and have migrated our largest piece of work - the Canvas Editor - to use it too. After switching to Mobius in the Canvas Editor, we observed fewer bugs, and race conditions with state were resolved. The level of testing and separation of concerns we’ve achieved using MVI over the years has improved our code quality and eliminated the “god class” Activity/ViewModel that we’ve seen in the past.

-We hope this write-up of our journey can help you think a bit more about state management and the potential for race conditions with complicated state. 
+We hope this write-up of our journey can help you think a bit more about state management and the potential for race conditions with complicated state.

Have any questions or feedback?
@@ -403,16 +404,16 @@ After digging through and learning the ins and outs of Mobius, we decided to sta

With every new framework or architecture decision, there is likely never going to be a solution that fits everyone, and adopting something like the Mobius Framework comes with its own disadvantages:

-- It requires quite a bit of boilerplate code to set up. We’ve solved this using shared code templates to generate most of the classes we need. 
-- New framework for people to learn. Any new engineer needs to spend a bit of time learning about the framework, working through some examples and implementing a feature with it. 
+- It requires quite a bit of boilerplate code to set up. We’ve solved this using shared code templates to generate most of the classes we need.
+- It is a new framework for people to learn. Any new engineer needs to spend a bit of time learning about the framework, working through some examples, and implementing a feature with it.

# Summary 🎉

-We’ve learnt a lot over the past couple of years, and I don’t doubt that we won’t learn more in the future. 
+We’ve learnt a lot over the past couple of years, and I don’t doubt that we’ll learn plenty more in the future.

-Right now, we have been successfully using Spotify’s Mobius Framework in our app for the past year and have migrated our largest piece of work - the Canvas Editor to use it too. After switching to Mobius in the Canvas Editor, we observed fewer bugs and race conditions with state were resolved. The level of testing and separation of concerns we’ve achieved using MVI over the years has improved our code quality and eliminated the “god class” activity/view Model that we’ve seen in the past. 
+We have now been successfully using Spotify’s Mobius Framework in our app for the past year and have migrated our largest piece of work, the Canvas Editor, to use it too. After switching to Mobius in the Canvas Editor, we observed fewer bugs, and race conditions involving state were resolved. The level of testing and separation of concerns we’ve achieved using MVI over the years has improved our code quality and eliminated the “god class” Activity/ViewModel that we’ve seen in the past.

-We hope this write-up of our journey can help you think a bit more about state management and the potential for race conditions with complicated state. 
+We hope this write-up of our journey helps you think a bit more about state management and the potential for race conditions with complicated state.

Have any questions or feedback? Feel free to reach out to [Rebecca](https://twitter.com/riggaroo), [Kamil](https://www.linkedin.com/in/kamilslesinski/) or [@GoDaddyOSS](https://twitter.com/godaddyoss)!

diff --git a/_posts/2022-01-06-tartufo-v3.md b/_posts/2022-01-06-tartufo-v3.md
index b38bfd5..439c617 100644
--- a/_posts/2022-01-06-tartufo-v3.md
+++ b/_posts/2022-01-06-tartufo-v3.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: We have prepared a brand new release of our credential scanning tool, tartufo, packed full of new features, massive performance gains, and improvements to the user experience!
keywords: tartufo, secrets, secret scanning, security
+canonical: https://godaddy.com/resources/news/tartufo-v3
authors:
  - name: Joey Wilhelm
    title: Sr Software Engineer
diff --git a/_posts/2022-01-10-running-puma-in-aws.md b/_posts/2022-01-10-running-puma-in-aws.md
index 4ec1b76..24bfd45 100644
--- a/_posts/2022-01-10-running-puma-in-aws.md
+++ b/_posts/2022-01-10-running-puma-in-aws.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: In the past couple of years, we have been on our journey to the cloud, migrating our web services to AWS. In this blog post, we share what we learned about deploying the Puma web server to AWS by migrating our email delivery service written in Ruby to AWS.
keywords: Puma, Ruby, AWS, ALB, Security
+canonical: https://godaddy.com/resources/news/running-puma-in-aws
authors:
  - name: Dalibor Nasevic
    title: Sr. Principal Software Engineer
diff --git a/_posts/2022-01-28-raising-the-bar-for-devsecops-beyond.md b/_posts/2022-01-28-raising-the-bar-for-devsecops-beyond.md
index cecd5ab..c3b2deb 100644
--- a/_posts/2022-01-28-raising-the-bar-for-devsecops-beyond.md
+++ b/_posts/2022-01-28-raising-the-bar-for-devsecops-beyond.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: DevSecFinOps (Development + Security + Finance + Operations) means developers are accountable for more and more disciplines related to the services they build. Organizations can ease this burden by building internal developer platforms that prioritize the developer experience.
keywords: DevOps, DevSecFinOps, Automation, Developer Platform
+canonical: https://godaddy.com/resources/news/raising-the-bar-for-devsecops-beyond
authors:
  - name: Keith Bartholomew
    title: Software Engineer
diff --git a/_posts/2022-03-22-fluent-bit-plugins-in-go.md b/_posts/2022-03-22-fluent-bit-plugins-in-go.md
index 489303d..d6a1c3a 100644
--- a/_posts/2022-03-22-fluent-bit-plugins-in-go.md
+++ b/_posts/2022-03-22-fluent-bit-plugins-in-go.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: Fluent Bit is a powerful tool for log management, filtering, and exporting. Learn how you can extend its functionality even further by using Go to build output plugins.
keywords: fluent-bit, logging, go, plugins
+canonical: https://godaddy.com/resources/news/fluent-bit-plugins-in-go
authors:
  - name: Todd Kennedy
    title: Principal Software Developer
diff --git a/_posts/2022-05-27-study-group-framework.md b/_posts/2022-05-27-study-group-framework.md
index ccb2b5f..6e21679 100644
--- a/_posts/2022-05-27-study-group-framework.md
+++ b/_posts/2022-05-27-study-group-framework.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: How to keep learning new skills while meeting deadlines at work and maintaining a good work-life balance.
keywords: skill development, certification, procrastination, conflicting priorities +canonical: https://godaddy.com/resources/news/study-group-framework authors: - name: Mayur Jain title: Director of Engineering diff --git a/_posts/2022-07-28-websites-and-marketing-case-study.md b/_posts/2022-07-28-websites-and-marketing-case-study.md index 2dcaeb7..9550a7c 100644 --- a/_posts/2022-07-28-websites-and-marketing-case-study.md +++ b/_posts/2022-07-28-websites-and-marketing-case-study.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Case study on how we improved performance and speed in Website + Marketing. keywords: website, performance, speed +canonical: https://godaddy.com/resources/news/websites-and-marketing-case-study authors: - name: Simon Le Parc title: Sr. Mgr - Software Development diff --git a/_posts/2022-09-12-rails-bulk-insert-mysql.md b/_posts/2022-09-12-rails-bulk-insert-mysql.md index 2fc4066..a4a74c8 100644 --- a/_posts/2022-09-12-rails-bulk-insert-mysql.md +++ b/_posts/2022-09-12-rails-bulk-insert-mysql.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: This blog post explores how we optimized our Email Batch API by using Rails bulk inserts with MySQL and how to calculate the auto-incrementing IDs for records, given MySQL does not support a RETURNING clause. keywords: rails, bulk insert, mysql, +canonical: https://godaddy.com/resources/news/rails-bulk-insert-mysql authors: - name: Dalibor Nasevic title: Sr. Principal Software Engineer diff --git a/_posts/2022-09-19-sample-size-calculator.md b/_posts/2022-09-19-sample-size-calculator.md index 280ec3a..2fa6813 100644 --- a/_posts/2022-09-19-sample-size-calculator.md +++ b/_posts/2022-09-19-sample-size-calculator.md @@ -8,6 +8,7 @@ options: usemathjax: true excerpt: GoDaddy's Hivemind team built a Python sample size calculator that handles a wide variety of experiment metric types and multiple testing scenarios. keywords: sample size calculator, A/B test, false discovery rate adjustment, Python +canonical: https://godaddy.com/resources/news/sample-size-calculator authors: - name: Xinyu Zou title: Data Scientist-Experimentation Platform diff --git a/_posts/2022-09-29-track-aws-resources-using-globaltechregistry.md b/_posts/2022-09-29-track-aws-resources-using-globaltechregistry.md index 8190874..499af1f 100644 --- a/_posts/2022-09-29-track-aws-resources-using-globaltechregistry.md +++ b/_posts/2022-09-29-track-aws-resources-using-globaltechregistry.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Global Tech Registry (GTR) is a metadata registry service that provides insight into GoDaddy's AWS Cloud deployments. By combining metadata from various sources with active AWS health events, GTR is able to immediately discover the impact on GoDaddy products and notify the relevant teams with impacted services. 
keywords: globaltechregistry, gtr, cloud operations, goDaddy metadata registry, Observability, aws service outage, aws resources discovery, aws health events, aws config, aws lambda, sns, sqs +canonical: https://godaddy.com/resources/news/track-aws-resources-using-globaltechregistry authors: - name: Jan-Erik Carlsen title: Principal Software Engineer diff --git a/_posts/2022-10-25-chasing-runaway-memory-usage-in-istio-sidecars.md b/_posts/2022-10-25-chasing-runaway-memory-usage-in-istio-sidecars.md index 31196a2..b663384 100644 --- a/_posts/2022-10-25-chasing-runaway-memory-usage-in-istio-sidecars.md +++ b/_posts/2022-10-25-chasing-runaway-memory-usage-in-istio-sidecars.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: When a service on our Kubernetes cluster started using too much memory, I thought the service itself was to blame. After a long debugging journey, I found that a misconfigured Istio sidecar proxy was actually to blame. keywords: kubernetes, istio, service mesh, cloudwatch, prometheus, bug +canonical: https://godaddy.com/resources/news/chasing-runaway-memory-usage-in-istio-sidecars authors: - name: Keith Bartholomew title: Senior Software Engineer diff --git a/_posts/2022-10-31-optimized-hosting.md b/_posts/2022-10-31-optimized-hosting.md index 625a92c..beeb460 100644 --- a/_posts/2022-10-31-optimized-hosting.md +++ b/_posts/2022-10-31-optimized-hosting.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Did you ever wonder how GoDaddy runs your website? Or how one might go about hosting millions of websites? How about running a hundred thousand Virtual Private Servers (VPS)? keywords: webhosting, vps, virtual private server, hosting, vserver, scale, reliability, availability +canonical: https://godaddy.com/resources/news/optimized-hosting authors: - name: Robert Breker title: Senior Director, Software Development diff --git a/_posts/2022-12-01-data-mesh.md b/_posts/2022-12-01-data-mesh.md index 1a70829..3141784 100644 --- a/_posts/2022-12-01-data-mesh.md +++ b/_posts/2022-12-01-data-mesh.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: In this post, we discuss how GoDaddy uses AWS Lake Formation to simplify security management and data governance at scale, and enable data as a service (DaaS) supporting organization-wide data accessibility with cross-account data sharing using a data mesh architecture. keywords: Amazon Simple Storage Service (S3), Analytics, AWS Big Data, AWS Glue, AWS Lake Formation +canonical: https://godaddy.com/resources/news/data-mesh authors: - name: Ankit Jhalaria title: Director of Engineering at GoDaddy diff --git a/_posts/2022-12-15-search-data-engineering.md b/_posts/2022-12-15-search-data-engineering.md index 27dec57..b6f2e82 100644 --- a/_posts/2022-12-15-search-data-engineering.md +++ b/_posts/2022-12-15-search-data-engineering.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: GoDaddy provides best in class search experience for people looking for domain names. Search data engineering is the critical plumbing behind the seamless search experience on the GoDaddy search page. In this blog post, we provide some insights into the inner workings of the data pipelines by delving into the architecture and the implementation of the search data infrastructure. 
keywords: Data Pipeline DataEngineering, Data Engineering Jobs, Data Mart, Domain search, Machine Learning, AWS, Airflow, EMR
+canonical: https://godaddy.com/resources/news/search-data-engineering
authors:
  - name: Ankush Prasad
    title: Principal Software Engineer
diff --git a/_posts/2023-02-03-mental-health-in-software-industry.md b/_posts/2023-02-03-mental-health-in-software-industry.md
index bf531ca..b80b837 100644
--- a/_posts/2023-02-03-mental-health-in-software-industry.md
+++ b/_posts/2023-02-03-mental-health-in-software-industry.md
@@ -5,19 +5,20 @@ date: 2023-02-03
cover: /assets/images/mental-health-repost/cover-photo.jpg
options:
  - full-bleed-cover
-excerpt: If you’ve ever been on a job hunt in the software industry, you’ve likely seen a company list “work-life balance” as a benefit. The fact that companies need to tell prospective employees that they’ll allow them to “have a life” is telling, but often missed in this discussion of balance, is mental health. 
+excerpt: If you’ve ever been on a job hunt in the software industry, you’ve likely seen a company list “work-life balance” as a benefit. The fact that companies need to tell prospective employees that they’ll allow them to “have a life” is telling, but often missed in this discussion of balance is mental health.
keywords: software, mental health, support
+canonical: https://godaddy.com/resources/news/mental-health-in-software-industry
authors:
  - name: Shane Parker
    title: Sr Software Development Engineer
    url: https://www.shnparker.com/
    photo: /assets/images/sparker.jpeg
---
-
-If you’ve ever been on a job hunt in the software industry, you’ve likely seen a company list “work-life balance” as a benefit. The fact that companies need to tell prospective employees that they’ll allow them to “have a life” is telling, but often missed in this discussion of balance, is mental health. 
-On his personal blog, Senior Software Development Engineer, Shane Parker, describes his journey to realization that the high stress of software development led to mental health issues that needed treatment. He highlights some of the common stressors that many people in software experience (and may not even know are stressors) and some of the changes he incorporated in his life to help improve his mental well-being. 
+If you’ve ever been on a job hunt in the software industry, you’ve likely seen a company list “work-life balance” as a benefit. The fact that companies need to tell prospective employees that they’ll allow them to “have a life” is telling, but often missed in this discussion of balance is mental health.
-We commend Shane for having the courage to share such a personal and important topic. His story serves as a reminder that mental health should be a priority for everyone in the software industry. If you're struggling with stress or mental health issues, know that help is available and it's important to seek support. 
+On his personal blog, Senior Software Development Engineer Shane Parker describes his journey to the realization that the high stress of software development led to mental health issues that needed treatment. He highlights some of the common stressors that many people in software experience (and may not even know are stressors) and some of the changes he incorporated in his life to help improve his mental well-being.
+
+We commend Shane for having the courage to share such a personal and important topic. His story serves as a reminder that mental health should be a priority for everyone in the software industry.
If you're struggling with stress or mental health issues, know that help is available and it's important to seek support. Check out Shane's full post on his site here, diff --git a/_posts/2023-03-02-mobile-to-webview-bridge-with-rxjs-and-redux.md b/_posts/2023-03-02-mobile-to-webview-bridge-with-rxjs-and-redux.md index 59fe92b..05cb828 100644 --- a/_posts/2023-03-02-mobile-to-webview-bridge-with-rxjs-and-redux.md +++ b/_posts/2023-03-02-mobile-to-webview-bridge-with-rxjs-and-redux.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: In this post, we'll take a look at how our team built a robust duplex bridge between our mobile and webview code. We'll also take a look at how we use RxJS observables to deal with messages from the bridge combined with actions dispatched from our React app that runs in the webview. keywords: RxJS, JavaScript, Promises, UI, Mobile, React, Redux +canonical: https://godaddy.com/resources/news/mobile-to-webview-bridge-with-rxjs-and-redux authors: - name: Hendrik Swanepoel title: Principal Engineer, Software Development diff --git a/_posts/2023-03-20-leveraging-ffis.md b/_posts/2023-03-20-leveraging-ffis.md index 8212709..0718453 100644 --- a/_posts/2023-03-20-leveraging-ffis.md +++ b/_posts/2023-03-20-leveraging-ffis.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: By leveraging foreign function interface and C shared libraries, GoDaddy can unify the implementation of libraries in Go or Rust and share those libraries with other languages. keywords: FFI +canonical: https://godaddy.com/resources/news/leveraging-ffis authors: - name: Jeremiah Gowdy title: Senior Principal Architect diff --git a/_posts/2023-03-28-data-platform-evolution.md b/_posts/2023-03-28-data-platform-evolution.md index 99bf1e3..fb3a91f 100644 --- a/_posts/2023-03-28-data-platform-evolution.md +++ b/_posts/2023-03-28-data-platform-evolution.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: This technical blog provides an in-depth look at the evolution of data at GoDaddy, highlighting the challenges faced along the way and the journey towards building a modern, low-cost cloud data platform. keywords: data platform, godaddy, data lake +canonical: https://godaddy.com/resources/news/data-platform-evolution authors: - name: Naren Parihar title: Sr. Director of Engineering diff --git a/_posts/2023-04-26-improving-company-agility-and-scale-in-the-cloud.md b/_posts/2023-04-26-improving-company-agility-and-scale-in-the-cloud.md index 9a0ce44..cb2494e 100644 --- a/_posts/2023-04-26-improving-company-agility-and-scale-in-the-cloud.md +++ b/_posts/2023-04-26-improving-company-agility-and-scale-in-the-cloud.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Learn how GoDaddy improved management, governance, and observability across its platforms using AWS. keywords: cloud platform, godaddy, agility, scale, aws +canonical: https://godaddy.com/resources/news/improving-company-agility-and-scale-in-the-cloud authors: - name: Ketan Patel title: Sr. Director of Software Development @@ -25,4 +26,4 @@ To read about how we continue to scale in order to deliver new value and feature If you want to be part of an awesome team that works to solve problems and build solutions for millions of small businesses, check out our [current open roles](https://careers.godaddy.com/search-jobs). 
-*Cover Photo Attribution: Photo by [William Bout](https://unsplash.com/@williambout) on [Unsplash](https://unsplash.com/photos/7cdFZmLlWOM).* 
+*Cover Photo Attribution: Photo by [William Bout](https://unsplash.com/@williambout) on [Unsplash](https://unsplash.com/photos/7cdFZmLlWOM).*
diff --git a/_posts/2023-05-23-application-layer-encryption-in-ruby-on-rails-with-asherah.md b/_posts/2023-05-23-application-layer-encryption-in-ruby-on-rails-with-asherah.md
index ed6f421..c4156fe 100644
--- a/_posts/2023-05-23-application-layer-encryption-in-ruby-on-rails-with-asherah.md
+++ b/_posts/2023-05-23-application-layer-encryption-in-ruby-on-rails-with-asherah.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: This article explores how we implement Application Layer Encryption in Ruby on Rails applications to protect customer-sensitive data with Asherah.
keywords: ruby, ruby on rails, application layer encryption, security
+canonical: https://godaddy.com/resources/news/application-layer-encryption-in-ruby-on-rails-with-asherah
authors:
  - name: Dalibor Nasevic
    title: Sr. Principal Software Engineer
diff --git a/_posts/2023-06-13-hosting-in-aws.md b/_posts/2023-06-13-hosting-in-aws.md
index c84bd56..6d26e78 100644
--- a/_posts/2023-06-13-hosting-in-aws.md
+++ b/_posts/2023-06-13-hosting-in-aws.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: Learn how GoDaddy improved the performance and reliability of its on-prem Websites + Marketing hosting platform by migrating to AWS.
keywords: cloud platform, godaddy, agility, scale, aws, hosting
+canonical: https://godaddy.com/resources/news/hosting-in-aws
authors:
  - name: Christopher Hinrichs
    title: Principal Software Engineer
diff --git a/_posts/2023-06-23-ceph-storage.md b/_posts/2023-06-23-ceph-storage.md
index 59626fa..2fa3230 100644
--- a/_posts/2023-06-23-ceph-storage.md
+++ b/_posts/2023-06-23-ceph-storage.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: Global Storage Engineering (GSE) migrated 2.5PB of Managed WordPress (MWP) data from vendor-supported storage to open-source, community-supported Ceph storage utilizing CephFS in 9 months, resulting in an improved customer experience.
keywords: Ceph, CephFS, MWP, Managed WordPress, Storage
+canonical: https://godaddy.com/resources/news/ceph-storage
authors:
  - name: Joe Bardgett
    title: SRE Sr. Manager
diff --git a/_posts/2023-08-07-lambda-rest-api-using-aws-cdk.md b/_posts/2023-08-07-lambda-rest-api-using-aws-cdk.md
index 809c6ce..b32b6b6 100644
--- a/_posts/2023-08-07-lambda-rest-api-using-aws-cdk.md
+++ b/_posts/2023-08-07-lambda-rest-api-using-aws-cdk.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: Explore the step-by-step process of deploying an AWS Lambda-backed API using AWS CDK in this detailed guide.
keywords: aws, godaddy, cdk, api
+canonical: https://godaddy.com/resources/news/lambda-rest-api-using-aws-cdk
authors:
  - name: Sushant Mimani
    title: Sr. Software Development Engineer
diff --git a/_posts/2023-08-14-open-source-summit-north-america-2023.md b/_posts/2023-08-14-open-source-summit-north-america-2023.md
index cd65bc8..ca730ff 100644
--- a/_posts/2023-08-14-open-source-summit-north-america-2023.md
+++ b/_posts/2023-08-14-open-source-summit-north-america-2023.md
@@ -7,6 +7,7 @@ options:
  - full-bleed-cover
excerpt: The 2023 Linux Foundation Open Source Summit in Vancouver, BC was one of the key events for open source developers. GoDaddy sent representatives to the conference and three of them share their experiences.
keywords: Open Source Summit, Linux Foundation, OpenJS Foundation, OSS +canonical: https://godaddy.com/resources/news/open-source-summit-north-america-2023 authors: - name: Courtney Robertson title: Open Source Developer Advocate diff --git a/_posts/2023-08-22-cpu-vulnerability-management.md b/_posts/2023-08-22-cpu-vulnerability-management.md index a5838d5..aad72d9 100644 --- a/_posts/2023-08-22-cpu-vulnerability-management.md +++ b/_posts/2023-08-22-cpu-vulnerability-management.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: CPU vulnerabilities that expose sensitive data have become commonplace in the last few years. Learn more about these vulnerabilities and how GoDaddy responds to protect its customers. keywords: downfall, zenbleed, spectre, meltdown, vulnerability, mitigation, cpu, intel, amd +canonical: https://godaddy.com/resources/news/cpu-vulnerability-management authors: - name: Brian Diekelman title: Principal Software Engineer diff --git a/_posts/2023-09-05-cmdb.md b/_posts/2023-09-05-cmdb.md index 105173a..f692962 100644 --- a/_posts/2023-09-05-cmdb.md +++ b/_posts/2023-09-05-cmdb.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Configuration Management Database (CMDB) plays a vitally important role in the hyper-efficient operation of all GoDaddy products. This article explains how GoDaddy evolved its CMDB into a trustworthy source of truth. keywords: GoDaddy, Configuration, Management, Database, CMDB, Configuration, Item, Security, Compliance +canonical: https://godaddy.com/resources/news/cmdb authors: - name: David Koopman title: Principal Engineer, SRE VI diff --git a/_posts/2023-09-28-aws-cdk-adoption.md b/_posts/2023-09-28-aws-cdk-adoption.md index 9c12ca1..1f9521c 100644 --- a/_posts/2023-09-28-aws-cdk-adoption.md +++ b/_posts/2023-09-28-aws-cdk-adoption.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Learn how GoDaddy is helping its developers provision infrastructure quickly and securely using AWS Cloud Development Kit. keywords: cloud platform, godaddy, agility, scale, aws, cdk, cloud development kit +canonical: https://godaddy.com/resources/news/aws-cdk-adoption authors: - name: Ketan Patel title: Sr. Director of Software Development diff --git a/_posts/2023-10-26-layered-architecture-for-a-data-lake.md b/_posts/2023-10-26-layered-architecture-for-a-data-lake.md index a9b7296..47ddbf7 100644 --- a/_posts/2023-10-26-layered-architecture-for-a-data-lake.md +++ b/_posts/2023-10-26-layered-architecture-for-a-data-lake.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: In this post, we discuss how GoDaddy uses Layered Architecture to build its Data Lake. keywords: Enterprise Data Layers, Analytics +canonical: https://godaddy.com/resources/news/layered-architecture-for-a-data-lake authors: - name: Kamran Ali title: former Principal Data Engineer diff --git a/_posts/2023-11-16-api-gateway-at-godaddy.md b/_posts/2023-11-16-api-gateway-at-godaddy.md index 17ff458..eff9133 100644 --- a/_posts/2023-11-16-api-gateway-at-godaddy.md +++ b/_posts/2023-11-16-api-gateway-at-godaddy.md @@ -7,6 +7,7 @@ options: - full-bleed-cover summary: We go over a new API Management initiative at GoDaddy using a self-serve API Gateway. keywords: godaddy, api gateway, api management, envoy, oauth2, authentication, authorization, rate limiting, observability +canonical: https://godaddy.com/resources/news/api-gateway-at-godaddy authors: - name: Carsten Blecken title: Sr. 
Principal Engineer @@ -20,7 +21,7 @@ authors: From domain registrations to commerce, GoDaddy is known to be the platform that entrepreneurs build their businesses on. There's an obvious need to provide simple and great user experiences through web and mobile so our customers (and their customers) can access all the services we offer. -While GoDaddy is the leader in registering domains for individual businesses, there is a large but lesser-known customer base of resellers and partners that rely on GoDaddy. These resellers and partners use different GoDaddy APIs to build their own offerings for various niche markets. The need to provide quality APIs to this group is essential for our customers to expand the reach of our solutions. +While GoDaddy is the leader in registering domains for individual businesses, there is a large but lesser-known customer base of resellers and partners that rely on GoDaddy. These resellers and partners use different GoDaddy APIs to build their own offerings for various niche markets. The need to provide quality APIs to this group is essential for our customers to expand the reach of our solutions. We laid the foundations for new omnichannel commerce solutions from GoDaddy in 2021 by building, consolidating, and optimizing various commerce services as a single, unified platform. This unified commerce platform drives all of our commerce solutions across all channels. Following standard modeling and design techniques like domain-driven design and an API-first approach has helped standardize all of our APIs and establish consistent authentication (authn) and authorization (authz) patterns. This ultimately led to building a common entry point for all our APIs to enforce consistent patterns and manage all APIs from a single location. @@ -123,10 +124,10 @@ Running Envoy inside [Amazon Elastic Kubernetes Service](https://aws.amazon.com/ Each region-based Kubernetes cluster is an AWS auto-scaling group, and nodes are easily scaled up through the cluster autoscaler. The cluster has three nodes in three different availability zones to maximize availability. Every 24 hours, these nodes are rotated to comply with our corporate security policy. -The gateway is set up in five different environments: +The gateway is set up in five different environments: - Experimental - the cluster used for the internal gateway development. - Development - the cluster used for the initial internal integration setup for services. -- Test - the cluster that allows service owners to run their automated integration tests. +- Test - the cluster that allows service owners to run their automated integration tests. - Staging (or OTE) - the cluster primarily used to run load and automated end-to-end tests and provides services in a production-like, highly-performing environment. - Production - the cluster where live traffic occurs. diff --git a/_posts/2023-11-20-emr-serverless-on-arm64.md b/_posts/2023-11-20-emr-serverless-on-arm64.md index 6ea1dc7..8545bd6 100644 --- a/_posts/2023-11-20-emr-serverless-on-arm64.md +++ b/_posts/2023-11-20-emr-serverless-on-arm64.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Learn how GoDaddy is helping its developers provision infrastructure quickly and securely using AWS Cloud Development Kit. 
keywords: Amazon EMR, Analytics, Best Practices, Graviton, Serverless +canonical: https://godaddy.com/resources/news/emr-serverless-on-arm64 authors: - name: Mukul Sharma title: Software Development Engineer diff --git a/_posts/2023-12-07-cloud-cost-management-aws.md b/_posts/2023-12-07-cloud-cost-management-aws.md index 2b84ff7..315568d 100644 --- a/_posts/2023-12-07-cloud-cost-management-aws.md +++ b/_posts/2023-12-07-cloud-cost-management-aws.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: Learn how GoDaddy instills a cost-accountable culture as it continues to scale quickly and securely using AWS. keywords: cloud, cost management, finops, godaddy, agility, scale, aws +canonical: https://godaddy.com/resources/news/cloud-cost-management-aws authors: - name: Ketan Patel title: Sr. Director of Software Development diff --git a/_posts/2023-12-12-authorization-oauth-openfga.md b/_posts/2023-12-12-authorization-oauth-openfga.md index bfc9fd8..17adae4 100644 --- a/_posts/2023-12-12-authorization-oauth-openfga.md +++ b/_posts/2023-12-12-authorization-oauth-openfga.md @@ -7,6 +7,7 @@ options: - full-bleed-cover excerpt: In this post we discuss GoDaddy's adoption of OAuth and OpenFGA for fine-grained authorization. keywords: oauth, openfga, authentication, authorization, security, zanzibar +canonical: https://godaddy.com/resources/news/authorization-oauth-openfga authors: - name: Jacob Brooks title: Principal Software Engineer