Ryan Sheppard

People are Good and your Ego is getting in the way

I've heard some consistent takes from friends and family saying "most people are idiots", "society is getting dumber and dumber as we speak" and "people have no common sense". These sayings are almost always ego driven, and it irks me every time I hear another version of the statement. I believe these views are wrong, and that we need to take a different approach. People are good, and believing that you, or a subset of humanity, are the only ones doing good is inherently ego-driven. The quicker we realize this, the quicker we can get to advancing humanity.

Everyone Thinks they are doing Good

No one, not even the worst of humankind, thinks they are doing something bad. Everyone has a justification for why they are doing something, and it always boils down to them doing something they believe is fundamentally good.

In 2001, Enron, a natural gas and pipeline company, filed for bankruptcy, with an estimated $40 to $45 billion attributed to fraud. In a 2013 interview with fortune.com, Andy Fastow, Enron's CFO at the time, said: "When I was working at Enron, you know, I was kind of a hero, because I helped the company make its numbers every quarter. And I thought I was doing a good thing. I thought I was smart" (source).

In 2022, Elizabeth Holmes was convicted on four counts of wire fraud after defrauding investors of approximately $140 million by claiming that her company had devised a device that could run blood tests accurately, rapidly and with a minimal amount of blood. In a 2014 interview with fortune.com, Holmes is quoted as saying "This is about being able to do good", and she later explains her motivations: "I genuinely don't believe anything else matters more than when you love someone so much and you have to say goodbye too soon" (source).

Even Al Capone, the notorious criminal of the 1920s and '30s who committed crimes such as racketeering, extortion, murder and bribery, is quoted as saying "I have spent the best years of my life giving people the lighter pleasures, helping them have a good time, and all I get is abuse, the existence of a hunted man".

Look, I'm not arguing that what these people have done is good, not at all, but I am trying to show that everyone has a justification for why what they're doing is good. Understanding their motivations does not excuse their actions. We can recognize someone's good intentions while still holding them responsible for the harm they caused. We can seek to understand motivations without accepting that all outcomes are equally good. I am not arguing that white supremacists, terrorists or antisemites are good, and if this is your response to my argument, you should read this paragraph again.

Once you recognize this pattern, that everyone justifies their actions as good, you start to see that the real problem isn't that some people are evil while others are good. The problem is that we all think we're the good ones.

Your Ego is Getting in the way

When you believe that you and people like you are the only "truly good" people, you're claiming moral superiority over billions of people. This isn't insight; it's arrogance dressed up as wisdom. You're placing yourself in an exclusive club of the enlightened while dismissing everyone else as fundamentally flawed. What are the odds that you, out of 8 billion people, are among the select few who truly "get it"? This is more than just an interpersonal problem. It's reducing humanity's problem-solving capacity.

Understanding that people are trying to do good gives us more compassion for what others are trying to do. When we immediately discredit someone as evil, wrong, or stupid because we are on the side doing good and they are on the side doing wrong, we lose the ability to collaborate and eliminate the possibility of tapping into their knowledge, creativity, background, history and resources. I've unfortunately seen too many instances of this affecting how we live and reducing our ability to advance together.

I still get notifications from a Facebook group I joined while living in my university town. The most engaged posts are often about problematic drivers, with comments like "this town's driving ability has been going downhill year after year." Or they're about doorbell footage showing package theft, with people asking "what is wrong with humanity?". Why are we painting with such a broad brush based on one person's circumstances?

I could go on about how alienation across all levels of the Canadian government, from federal to municipal to law enforcement, is also driven by ego and hinders human advancement, but I'll save that for another post.

Looking Forward

I hope to see more instances of people looking for reasons to unite with one another rather than divide. We are all trying to do good, and I think that makes us good. If we weren't so tied up with criticizing others and complaining about how much is wrong with the world, we could all be more effective at identifying genuine threats and advancing humanity in the right direction. The sooner we realize this, the quicker we can get to advancing humanity.

Ryan's Individual Contributor Guide

This is what I look out for when reviewing Pull Requests. You can use this guide to develop your own opinion on code reviews, or as a reference for your colleagues to explain why you're requesting a change. This guide is written for the individual contributor making a contribution to a codebase.

Before Contributing

Stop Tactical Programming

Tactical programming is the type of programming where you look to complete the code as quickly as possible. You may take the first solution you find to a problem and go with it, or use a previous solution you know may not be the best instead of looking for new solutions. This is great when you have very limited time to complete a task, such as when the deadline is close or when there is a critical bug in production. However, most programming should be done strategically.

Strategic programming is when you produce code that is very well designed. Your primary focus while programming this way should be designing your codebase to reduce complexity. Take a step back before writing code and think about the context of what you will program.

- Is there a class that already exists which gets the job done?
- Are there multiple ways to design this? If so, which one is better?
- In the future, will a developer know what this class does just by looking at the interface?
- If a developer needs to make a change to my code, will it be easy to do?
- Am I introducing unnecessary complexity by creating this change?

Be Pragmatic

Understand the context in which you are working. For example, if you are working at a large company where there is less of a rush, you may be able to follow all code standards perfectly. You can take another day or two to mock out that library you've been meaning to mock so the test suite can have better tests. If you are working at a smaller company where you need to maximize the value you provide in a short time, it becomes more and more important to balance your contribution quality with the time it takes to make your contribution, and not to strive for perfection. You may have to settle for one great test, a test that relies on implementation details, or no tests at all.

Is what I'm contributing right now more important and urgent than any other contributions I could be making?

Commits

Logically Grouped Changes

Your commits should contain logically grouped changes. For example, if you are going to refactor code so that it's easier to implement a new feature, you should have two commits: one for refactoring the code, and one for implementing the new feature. How exactly you logically group your commits is up to you and your team. I've seen some teams group their tests and the new feature they are testing in one single commit.

This matters in the event of a breaking change. If your commit contains changes to your build script and a new feature, and your commit is named "add new feature", it can be hard to understand at a glance why this new feature broke the build. Plenty of tools are out there to look at the git history of only the build script; however, it may be difficult for new developers who don't understand the codebase to find them.

If a future developer viewed this commit, would there be any surprise code changes?

Messages Should Explain Why

Put simply, if I wanted to view what changes a commit made, I would look at the file differences for that commit. If I want to understand why a commit was made, I should be able to tell from the commit message. An important and under-utilized feature in git is multiline commit messages.

If you enter a title in your commit ("fix: force client to load latest changes"), then separate your message by one empty line, you can fill in the rest of the commit message with a body that adds additional context ("index.js was being cached by the client, and so the latest changes would not load. closes: #543").

Will a future developer understand why I made this commit?

Pull Requests

Always Review the Diff

This may seem obvious, but I'm always surprised when I see parts of a pull request that could have been caught if the author had briefly reviewed their own code. API keys, personal comments and debug logs are a few things I've seen in pull requests.

Is there anything immediately obvious that shouldn't be in my pull request?

Testing

Do not Test Implementation Details

To the greatest extent possible, your tests should not rely on implementation details. Implementation details are the parts of your code that implement the feature, user story or interaction. Tests that rely on implementation details need to change each time the implementation changes, which happens much more frequently than any feature, user story or interaction changing. Writing your tests against implementation details is a major cause of a brittle test suite, as tests will break each time you write new code.

Let's say you are writing a React-Native application with some deeply nested and complicated navigation. It may be tempting to mock out navigation calls as a whole and simply assert they are called. You could imagine that if a screen turned into a popup modal which uses a show/hide mechanism instead of a call to the navigation router, your test would fail because the navigation call is no longer made. The implementation has changed, but the feature has not. Now imagine if all your tests were based on implementation details. How many times would you break the test suite when you change an implementation?

Am I testing an interaction with my application, or am I testing implementation details?

Mock Inputs and Capture Outputs

When writing tests, be concerned with user interaction. Anywhere the user may be interfacing with your software (e.g., through an API call, or through a UI) should be where you start your tests. The further your tests get from user interaction, such as in fine-grained unit tests, the less value you get for the time you put in. In general, I follow this framework for writing test cases (a short sketch follows below):

1. Mock all inputs (e.g., GET requests, database reads).
2. Trigger user interaction (e.g., click on a button, call an API function).
3. Capture all outputs (e.g., POST requests, database writes).

At the end of your tests, you should assert that the outputs are what you expected.

Am I only mocking external inputs, and not implementation details?

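To make this concrete, here is a minimal sketch of the framework using JUnit 5 and Mockito. The OrderService, PriceRepository and OrderPublisher types are hypothetical names invented for illustration; the point is that only the external input is mocked and only the external output is asserted on.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;

class OrderServiceTest {

    interface PriceRepository { double currentPriceFor(String sku); } // input (database read)
    interface OrderPublisher { void publish(Order order); }           // output (message/POST)
    record Order(String sku, int quantity, double total) {}

    // The code under test: reads a price, builds an order, publishes it.
    record OrderService(PriceRepository prices, OrderPublisher publisher) {
        void placeOrder(String sku, int quantity) {
            double total = prices.currentPriceFor(sku) * quantity;
            publisher.publish(new Order(sku, quantity, total));
        }
    }

    @Test
    void placingAnOrderPublishesItWithTheCurrentPrice() {
        // 1. Mock all inputs.
        PriceRepository prices = mock(PriceRepository.class);
        when(prices.currentPriceFor("book-123")).thenReturn(25.00);

        // 2. Trigger the user-facing interaction.
        OrderPublisher publisher = mock(OrderPublisher.class);
        new OrderService(prices, publisher).placeOrder("book-123", 2);

        // 3. Capture the outputs and assert on them.
        ArgumentCaptor<Order> published = ArgumentCaptor.forClass(Order.class);
        verify(publisher).publish(published.capture());
        assertEquals(50.00, published.getValue().total());
    }
}

If the service later changes how it builds the order internally, this test keeps passing as long as the published output stays the same.
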
Linting

It's important that you follow some standard of code style guidelines in the project. A codebase with multiple styles is like reading a novel where the author switches tone randomly. Codebases with a single style, like books, increase comprehension by allowing for faster reading. If your codebase does not have an automatic linter yet, this might be a great feature to bring up with your manager. In the meantime it's best to follow the style of whichever programmer came before you. Don't start implementing your own style because you think it's better, unless you can change the entire codebase to your style. Otherwise your change will most likely get left for the next developer, who introduces their own style, until you have style spaghetti.

Ideally, your codebase should have some sort of automatic continuous integration that runs on your pull requests to make sure your code follows the style guidelines of the project. If it doesn't, this is another great feature to bring up with your manager to improve code quality.

Does my contribution follow the style of the project?

Logging

Logging Levels

- Debug: Very important message for local development.
- Information: Useful for understanding the flow of control. No attention is required.
- Warning: There was an issue, although the task was able to continue. You should probably do something about this.
- Error: There was an issue and the task has reached an unrecoverable state. You need to do something about this.

Debug Logs

Personally written debug logs can be useful, but usually aren't. Do not commit your personal debug logs, because they don't have any meaning to anyone other than yourself. If you are going to commit debug logs, they should be useful to the whole team and to future developers. A debug log like "failure" is not useful. It should instead be formatted like this: "POST /api/orders/${id} Unsuccessful. Reason: ${reason}". Notice how the debug log now gives context about where in the flow of control it was emitted, and the log is easily searchable in the codebase if there were issues. At this point, this log could be considered an information-level log.

Too many debug logs are bad, as it can become hard to find the new logs you put in for your own development. Each time you commit a debug log to the codebase, it should be immediately obvious to everyone why it needs to be kept. Even well-formatted debug logs should not be tolerated by your team in excess.

Will future developers understand why this log is needed?

Data Models

Be more concerned with your data models than your functions

There is one thing that is guaranteed to live longer than your code, and that's your data models. Data models are the fundamental abstraction that defines your codebase's interactions. Those interactions change all the time, but your data models will rarely change. Data models are very important for future developers to understand your application. Typing a Java response as a JsonNode, or labelling a TypeScript object with type "any", gives future developers no insight into what interactions they can make in your application. Even worse, they may add to your data model in ways that misconstrue the abstraction it was originally meant to capture. Think hard about your data models, as they can make or break the future of your application.

Can I make my data model simpler?

References

A Philosophy of Software Design by John Ousterhout
The Pragmatic Programmer by David Thomas and Andrew Hunt

How to Publish Request Metrics to AWS CloudWatch from Spring Boot

Ever wondered what's the easiest way to publish Spring Boot request metrics to CloudWatch in AWS? In this article I explain the most common way to gather Spring Boot request metrics, such as request timing and response status codes, then publish and plot the data in AWS.

Prerequisites

- AWS Account
- Spring Boot

Dependencies

The application I am using in this article uses Spring Boot version 2.1.18.RELEASE, however the steps should be relatively similar for newer Spring Boot versions. Please note that these are the dependency versions that work for me based on my Spring Boot version.

- org.springframework.boot:spring-boot-starter-actuator
- software.amazon.awssdk:cloudwatch:2.17.281
- io.micrometer:micrometer-core:1.5.17
- io.micrometer:micrometer-registry-cloudwatch2:1.5.17

Micrometer

Before we move on, it's important to talk about a library closely related to gathering metrics in Spring Boot. The most common way to gather metrics is through Micrometer. Fundamental to Micrometer is the concept of a "meter", which is an abstraction for collecting metrics data. Meters can be dimensional to allow for various kinds of tracking across time, and Micrometer supports a wide range of meter types such as timers, counters and gauges.

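To make the meter concept more concrete, here is a minimal sketch (not taken from the application in this article) that registers a dimensional counter and a timer against Micrometer's in-memory SimpleMeterRegistry. The meter names and tags are made up for illustration; once the CloudWatch registry shown below is in place, the same calls would publish to CloudWatch instead.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class MeterExample {
    public static void main(String[] args) {
        // In-memory registry for demonstration; Spring Boot wires in the real one.
        MeterRegistry registry = new SimpleMeterRegistry();

        // A dimensional counter: each unique combination of tags is tracked separately.
        Counter orders = Counter.builder("orders.created")
                .tag("channel", "web")
                .register(registry);
        orders.increment();

        // A timer records both how many times something ran and how long it took.
        Timer timer = Timer.builder("orders.processing.time").register(registry);
        timer.record(() -> System.out.println("processing an order"));

        System.out.println(registry.get("orders.created").counter().count()); // 1.0
    }
}
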
Gathering Metrics

Spring Boot Actuator does most of the legwork involved with collecting request metrics, although some configuration is required to get metrics production ready. In my application, I want to enable the smallest set of metrics possible so that I won't be spamming CloudWatch with metrics I'm not interested in. This keeps the data relatively clean, and is more scalable since you pay for the metrics you send to CloudWatch. The following configuration disables all metrics except for the request metrics I am interested in:

management:
  metrics:
    enable:
      all: false
      http.server.requests: true

Next, we need to expose these metrics so that AWS can fetch them. Luckily, Actuator provides us with another simple configuration parameter:

management:
  endpoints:
    web:
      exposure:
        include: metrics

All request metrics can now be accessed at ${base_url}/actuator/metrics. When our application goes to publish metrics to AWS, it will use this endpoint to do so.

Publishing Metrics

For simplicity, we will be connecting to AWS using an access key and secret key in plain text. In an ideal world, we would inject these secrets into our Spring application using some sort of secret manager like AWS Secrets Manager; however, that could be a whole blog post in itself. In the following code, we are going to do two things:

1. Build the CloudWatch client in our application by authenticating the application with our AWS account.
2. Create the MeterRegistry where our metrics will be registered and stored in our application before they are sent at intervals to AWS.

import io.micrometer.cloudwatch2.CloudWatchConfig;
import io.micrometer.cloudwatch2.CloudWatchMeterRegistry;
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClientBuilder;

@Component
public class CloudWatchUtil {

    private CloudWatchAsyncClient cloudWatch;

    @Bean
    private MeterRegistry meterRegistry() {
        // Authenticate the CloudWatch client with our AWS account.
        AwsCredentials awsCreds = AwsBasicCredentials.create("aws-access-key", "aws-secret-key");
        StaticCredentialsProvider scp = StaticCredentialsProvider.create(awsCreds);
        CloudWatchAsyncClientBuilder builder = CloudWatchAsyncClient.builder()
                .credentialsProvider(scp)
                .region(Region.of("aws-region"));
        cloudWatch = builder.build();

        // Configure the registry that batches metrics and sends them to CloudWatch.
        CloudWatchConfig cloudWatchConfig = new CloudWatchConfig() {
            @Override
            public String get(String key) {
                return null;
            }

            @Override
            public String namespace() {
                return "my-namespace";
            }
        };

        return new CloudWatchMeterRegistry(cloudWatchConfig, Clock.SYSTEM, cloudWatch);
    }
}

The CloudWatchConfig is a Micrometer paradigm for customizing the CloudWatchMeterRegistry. This documentation from Micrometer explains how to do this. Returning null in get(String key) keeps all the defaults, which sends metrics to CloudWatch every minute under the namespace returned by namespace().

⚠️ The user/role your application runs on in AWS will need the "cloudwatch:PutMetricData" permission.

Plotting Metrics

Metrics are published to CloudWatch in two different units. http.requests.count is a count of HTTP requests (unit: count). http.requests.avg, http.requests.max and http.requests.min are all request timings in milliseconds. Each request is tagged so that you can group them by the following information:

- Response status code.
- URI being called.
- Exception (if any).
- HTTP method.

For example, I plotted a graph that shows the number of times a URI returned the 500 status code, and another graph that shows the average request timing for each URI.

Future Improvements and Considerations

As mentioned earlier, it would be best to inject your AWS secrets at run-time via a secret manager or environment variables. This can be done by incorporating an AWS secret manager dependency (similar to the CloudWatch dependency) to securely import your secrets.

Have any comments or improvements on the above article? Let me know by submitting a Pull Request on this website's GitHub repository.

Authenticating GitHub Actions with AWS using Terraform

This article guides you through authorizing your GitHub Actions workflow with AWS and helps you understand the underlying mechanisms.

Prerequisites

- AWS Account.
- Some knowledge of AWS IAM (Identity and Access Management) roles and policies.
- Terraform 1.8.4.
- A Terraform configuration linked to your AWS account (Tutorial).

OpenID Connect

OpenID Connect (OIDC) is the gold standard for authentication. It's the simplest way to securely verify the identity of our GitHub Actions workflow with AWS. There are some great in-depth articles out there that can explain OIDC better than I can, like this one from the OpenID Foundation and this one from Okta, but I will briefly explain OIDC for the purpose of this article.

OIDC runs on the OAuth2 authorization framework. OAuth2 doesn't provide a method for verifying the identity of a user or entity; rather, it grants access to resources (e.g., an admin-only page, a database or a set of images). Using the authorization capabilities of OAuth2, OIDC securely verifies the identity of a user or entity. OIDC grants a login "session" to the client so the client can use a single identity to request multiple resources. Using GitHub's OIDC Identity Provider (IdP) server, we can grant a login session to our GitHub Actions workflow from AWS so that we can work with our AWS account from GitHub Actions.

Terraforming the OIDC Configuration

When Terraform creates infrastructure in our AWS account, it assumes the role of an IAM user (which we should have set up when creating a Terraform configuration linked to our AWS account). Sometimes we need some information from this role (like the account ID) to complete certain actions. Since we don't want to store these things in plain text, AWS has created the aws_caller_identity data source.

data "aws_caller_identity" "current" {}

We also need to define the IdP who is going to be giving out the identity of our GitHub Action to AWS. Similar to the aws_caller_identity data source, we define the location of this IdP in Terraform so that we can access information about the IdP later.

data "aws_iam_openid_connect_provider" "github_actions" {
  url = "https://token.actions.githubusercontent.com"
}

Our GitHub Action identity must be registered in AWS in order for trust to be established between the two parties. To do this, we must create an IAM policy which uses information from both parties. Most of the information on the IAM policy principals and conditions for authorizing GitHub with AWS is specific to GitHub and as such can be found in this article by GitHub. If you are having trouble establishing an authorized connection between GitHub and AWS, I would start with that article, since there are a few nuances depending on your environment. The following is the IAM policy document that works for me.

data "aws_iam_policy_document" "github_attachment_policy" {
  version = "2012-10-17"

  statement {
    sid     = ""
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/token.actions.githubusercontent.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:${var.GITHUB_ORGANIZATION}/${var.GITHUB_REPO}:ref:refs/heads/${var.GIT_BRANCH}"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = data.aws_iam_openid_connect_provider.github_actions.client_id_list
    }
  }
}

Take note of how we are using the aws_caller_identity and aws_iam_openid_connect_provider data sources from earlier.

💡 ${var.VARIABLE} is a Terraform variable. Learn how to set up Terraform variables here.

💡 ${var.GIT_BRANCH} safeguards against unauthorized infrastructure changes. I set the branch in my configuration and only certain people can approve changes to it. This, in addition to least-privilege AWS IAM roles, further secures the GitHub Action.

Next we specify the AWS IAM role that the GitHub Action is going to assume, and attach the policy we created to this role. I've included ${var.GITHUB_REPO} and ${var.GIT_BRANCH} in the name of this role for easy identification (in case we have multiple branches or roles).

resource "aws_iam_role" "github_pipeline" {
  name                 = "github_${var.GITHUB_REPO}_${var.GIT_BRANCH}_branch"
  description          = "Role used by the GitHub Actions pipeline in the ${var.GITHUB_REPO} repository on the ${var.GIT_BRANCH} branch."
  max_session_duration = 3600 # 1 hour
  assume_role_policy   = data.aws_iam_policy_document.github_attachment_policy.json
}

Authorizing in GitHub Actions workflow

Finally, we need to assume the role we created inside a GitHub Actions workflow so that it can make changes in AWS. Make sure this workflow is running on the branch we specified in ${var.GIT_BRANCH} earlier.

name: Authenticate with AWS

on:
  push:
    branches:
      - 'branch_name'

permissions:
  id-token: write

jobs:
  authenticate-with-aws:
    runs-on: ubuntu-latest # runner not specified in the original; any runner label works
    steps:
      - name: Authenticate with AWS
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-region: ca-central-1
          role-to-assume: "arn:aws:iam::927869390122:role/github_$_$_branch"

The important things to note here are:

- Adding id-token: write to the workflow so that the identity token can be fetched from GitHub's IdP server and sent to AWS to authorize the workflow.
- Adding role-to-assume in the "Authenticate with AWS" step, set to the role we created earlier using Terraform.

Next Steps

To make any changes to AWS, create a new IAM policy with only the minimal amount of permissions needed. Then, attach this policy to the github_pipeline role we created in this article.

Future Improvements and Considerations

Have any comments or improvements on the above article? Let me know by submitting a Pull Request on this website's GitHub repository.

RBC Co-op Work Term Report

After careful consideration, on October 21st, 2022 I made the decision to join RBC in downtown Toronto. I learned how large companies run their engineering teams, and about the dedicated employees who make up the bank. I met with many interesting directors and listened to thought-provoking executives. I had an amazing time at RBC, and I see the opportunities that RBC provides for each employee.

Information About the Employer

RBC needs no introduction; it's the top bank in Canada. With locations across the country, it's hard not to see an RBC branch, ATM or advertisement. Instead, I thought I'd share some interesting facts about the bank sourced from its employees:

- The corporate headquarters at 200 Bay Street has 71,000 g of 24-carat gold on its 14,000+ windows. The gold windows reflect heat radiation, keeping the building cool in the summer and warm in the winter.
- RBC will invest $500 million in its Future Launch initiative over a span of 10 years, helping young people like me get the job they want.

Job Description

In the first half of my work term, I moved many of my team's legacy applications from RedHat OpenShift 3 to OpenShift 4. From writing code, to testing, to vulnerability management, quality assurance, deploying into production, and production implementation verification, I learned what it takes to get an application into production. In my second term, I worked on feature development for the Payment Orchestration squad. The service I worked on was responsible for orchestrating data from upstream services and sending it to the appropriate team to process the payment. In this position, I got a better idea of the software development lifecycle, organization and culture at the bank.

Across both terms, I participated in alternative opportunities. I started working with Bojan Nokovic, Ph.D., on an AI-related project to help detect fraudulent sign-ins. Additionally, I led two new developers on an internal project to streamline developers' task management. I learned the most about myself and about the bank when I pursued these alternative opportunities.

Goals

Book 6 coffee chats with people from other teams ✅

I met as many people as I could at the bank because I wanted to understand engineering at a larger scale. Since I was a co-op student, I had the perfect excuse for messaging directors and asking them to chat over coffee. Here are some of the interesting people I met:

- Jim Miller (Recruiter). Jim is incredibly selfless. See my LinkedIn post about Jim. I wish him and his wife well, and I hope he gets everything out of life.
- Geoffrey Peart (Senior Director, Digital Agile Practice). Jeff was around when RBC started to move from old-school banking to digital banking. He is one of the pioneers of Agile within RBC. He gave valuable insight on agile, the Cynefin framework, and solving team challenges.
- Paul Chester (Director, OpenShift Infrastructure). Paul has been with the bank for over 20 years. He told cool stories of moving application code from the mainframe into the cloud. I asked questions about how he runs his team, and he gave interesting answers like establishing processes so the team runs itself, and delegating duties.
- Kevin Kwong-Chip (Senior Manager, Open Banking Development). I asked Kevin questions related to team conflict. He gave me tools and tips for dealing with personal frustrations, and we had an interesting conversation about different types of workers.

Become familiar with at least one popular technology used at RBC ✅

I made this goal in my first term, when I wasn't excited about the technology I would be using, so I wanted to dedicate some time to at least one marketable technical skill. I'm happy to say that I've come out of RBC with many technical skills. RedHat OpenShift Container Platform 4, Jenkins, Spring Boot 3, and Java 17 are just some of the technologies I am now familiar with. I'm very pleased that I was able to work with so many different technologies used across the industry during my work term at RBC.

Make contributions to at least one inner source repository ✅

RBC's "inner source" is an open source ecosystem for RBC employees. It contains software like Angular component libraries, Linux Docker containers preinstalled with SSL certificates, and Spring OIDC components. I wanted to make a lasting impact at RBC, and this was a way for me to do that. I was able to add a missing link to some documentation, but I didn't dedicate nearly enough time to this goal to make a lasting impact. Luckily, I believe I contributed enough to my team to make the impact this goal was originally trying to make. Nonetheless, I did make a contribution to an inner source repository, completing my goal.

Lead an internal project ✅

At my second coffee chat, Jim Miller mentioned that the best way to improve confidence at work is to find areas where I can apply leadership. Solidifying your position as a leader in some aspect makes you feel useful and established, which helps with confidence. I found an opportunity to lead two new developers on a project, to raise their technical knowledge and to establish some foundations they wouldn't get to learn in their day-to-day. Overall, we faced many different productivity and motivation challenges, but I'm happy to say that we came out of the project with something instead of nothing. I think both developers were appreciative of my guidance, despite us not having come as far as we wanted to.

Migrate a Spring Boot 2 application to Spring Boot 3 ✅

I knew this goal was going to be my largest while at the bank, so I set it to keep myself focused on completing it within my term. I'm happy to say I am the first to upgrade one of my team's services from SB 2 to SB 3. This is important because it creates a guide for all other SB upgrades within my team. The challenge with upgrading SB applications is not upgrading SB itself (they provide an awesome guide on doing that) but rather all the third-party dependencies that need to be updated for compatibility with SB. For example, my app's Spring Security component required major updates that necessitated a deep understanding of Spring Security. Sometimes a method was removed in an upgrade and I had to hunt down its replacement. You can read more about another challenge I had when upgrading Spring Boot in this blog post.

Conclusions

The beginning of my time at RBC was tumultuous, but it really turned around. I'm grateful to have met the people I got to meet, and to have had the experiences I had. If I had to do it all over again, I would; and I think that's a good indication of my time at RBC.

Acknowledgements

Zhiming Xu (Lead Software Developer) was a major influence on my return to RBC in the summer. My winter term was hard, but Zhiming guided me and reassured me that our project was unusually difficult. He inspires me to be more resilient and patient when things go awry.

Pradeep Sappa (Manager, Payment Orchestration Services) added fuel to my fire. His care and concern are unmatched, and he consistently donated his time to teach me and listen to my concerns. His technical knowledge is rich; I have a lot to learn from him.

Shubhi Gupta (Director, Digital Security and Shared Services) was an integral part of my success at the bank. She helped me make multiple connections, and gave me many opportunities to flourish.

Bojan Nokovic, Ph.D. (Principal Engineer and Research Scientist) gave me many alternative opportunities at RBC. I worked with him to maintain an AI project he led a few years ago, and he asked me to review a paper and presentation he was publishing in an AI journal. He gave me my first real exposure to academia, and I'm very grateful for the opportunities he handed me.

Managing Multiple Release Versions

Recently, I made a mistake while releasing a new version of a component. My team maintains various Spring components whose versions match Spring Boot's release versions. With the release of Spring Boot 3, I made the change to upgrade our component. A few weeks later, a colleague needed to make a change to an older version of the component. I wasn't sure what to do in this situation, which took me down a rabbit hole on how others manage multiple release versions.

Spring Boot

For Spring Boot, everything that goes on the main branch is preparation for the next release version. When a minor version is released, a new branch is created off of main. Interestingly, each branch includes both the major and the minor version for each release (for example, 2.3.x or 3.1.x). This is an obvious strategy to accommodate their support policy. If a change to an older release is needed, the Spring Boot team will merge the older release branch containing the change into the newer release branches, until the change is merged into the main branch.

The Spring Boot team uses very little automation in their release strategy. Each pull request goes through a rigorous build process, but maybe this is all that is required in a repository that is intensely active and frequently changed. Managing multiple minor versions requires more work, but the Spring Boot team has the amount of work down to just the essentials. In my research, this is by far the most popular release strategy for managing multiple versions. Other projects that use this type of release strategy are the Phoenix Framework, Scala, and Kafka.

Vue

Vue uses different repositories to manage their major versions. Since changes won't need to be merged upstream, and the codebases are largely different, this is an appropriate design for managing multiple versions. Where Vue gets interesting is when it needs to make a change to a previous version. When Vue 3 needs to revert a change, for example, they will release the change under a new, incremented version number instead of fixing the old version and merging the changes upstream.

Alternative Considerations

Feature Toggle

It's important to consider whether managing multiple versions is really necessary. A common alternative is putting new changes behind feature toggles. For example, you can first release a change into the wild and gradually enable it for a subset of your users. Once it's been well tested by more than half of your users, it's safe to release the feature and remove the toggle from your code. This type of release strategy reduces the need for managing multiple versions since changes should be well tested, and there should be no need to "go back" and make a change.

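As a rough illustration of how small a toggle can be, here is a minimal sketch of a percentage-based rollout keyed on the user. The class and feature names are hypothetical, and a real project would usually reach for a feature-flag library or service rather than hand-rolling this:

import java.util.Set;

public class FeatureToggle {

    private final Set<String> enabledFeatures; // features that are fully released
    private final int rolloutPercentage;       // gradual rollout for everything else

    public FeatureToggle(Set<String> enabledFeatures, int rolloutPercentage) {
        this.enabledFeatures = enabledFeatures;
        this.rolloutPercentage = rolloutPercentage;
    }

    // Deterministic per user: the same user always sees the same behaviour,
    // and raising the percentage gradually enables the feature for more users.
    public boolean isEnabled(String feature, String userId) {
        if (enabledFeatures.contains(feature)) {
            return true;
        }
        int bucket = Math.floorMod((feature + userId).hashCode(), 100);
        return bucket < rolloutPercentage;
    }
}

Once the feature has been enabled for everyone, the toggle and the old code path can both be deleted, which is what removes the need to maintain an older release.
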
Non-decreasing Versioning

Code that is not depended upon by other applications most likely doesn't need multiple release versions. Web applications are a good example of code that doesn't need multiple release versions. If a previous version needs a change, it can be added to the next release. Since services that depend on your application interface with it via some API, not a dependency version, releasing the new change in an incremented version works just fine and solves many of the headaches that come with managing multiple release versions.

Conclusions

Every company will have their own strategies for maintaining multiple releases of the same component. Establishing a support policy will help decide the effort required to maintain multiple releases, or whether maintaining multiple releases is really needed at all. In my case, since our component matches the release version of Spring Boot, it makes sense to adopt the same release strategy Spring Boot uses for releasing their framework.