Ryan Sheppard

Ryan's Individual Contributor Guide

This is what I look out for when reviewing Pull Requests. You can use this guide to develop your own opinion on code reviews, or as a reference for your colleagues to explain why you're requesting a change. This guide is written for the individual contributor making a contribution to a codebase.

Before Contributing

Stop Tactical Programming

Tactical programming is the type of programming where you look to complete the code as quickly as possible. You may take the first solution to a problem you find and go with it, or reuse a previous solution you know may not be the best instead of looking for new ones. This is great when you have very limited time to complete a task, such as when a deadline is close or when there is a critical bug in production. However, most programming should be done strategically.

Strategic programming is when you produce code that is very well designed. Your primary focus while programming this way should be designing your codebase to reduce complexity. Take a step back before writing code and think about the context of what you will program:

Is there a class that already exists which gets the job done?
Are there multiple ways to design this? If so, which one is better?
In the future, will a developer know what this class does just by looking at the interface?
If a developer needs to make a change to my code, will it be easy to do?
Am I introducing unnecessary complexity by making this change?

Be Pragmatic

Understand the context in which you are working. For example, if you are working at a large company where there is less of a rush, you may be able to follow all code standards perfectly. You can take another day or two to mock out that library you've been meaning to mock so that the test suite can have better tests.
If you are working at a smaller company where you need to maximize the value you provide in a short time, it becomes more important to balance your contribution quality against the time it takes to make your contribution, and not to strive for perfection. You may have to settle for one great test, a test that relies on implementation details, or no tests at all.

Is what I'm contributing right now more important and urgent than any other contributions I could be making?

Commits

Logically Grouped Changes

Your commits should contain logically grouped changes. For example, if you are going to refactor code so that it's easier to implement a new feature, you should have two commits: one for refactoring the code, and one for implementing the new feature. How exactly you logically group your commits is up to you and your team. I've seen some teams group their tests and the new feature they are testing in a single commit.

This is important in the event of a breaking change. If your commit contains changes to your build script and a new feature, and your commit is named "add new feature", it can be hard to understand at a glance why this new feature broke the build. Plenty of tools are out there to look at the git history of only the build script; however, it may be difficult for new developers who don't understand the codebase to find this.

If a future developer viewed this commit, would there be any surprise code changes?

Messages Should Explain Why

Put simply, if I wanted to view what changes a commit made, I would look at the file differences for that commit. If I want to understand why a commit was made, I should be able to tell by the commit message. An important and under-utilized feature in git is multiline commit messages.
If you enter a title in your commit ("fix: force client to load latest changes"), then separate your message with one empty line, you can fill in the rest of the commit message with a body that adds additional context ("index.js was being cached by the client, and so the latest changes would not load. closes: #543").

Will a future developer understand why I made this commit?

Pull Requests

Always Review the Diff

This may seem obvious, but I'm always surprised when I see parts of a pull request that could have been caught if the author had briefly reviewed their own code. API keys, personal comments and debug logs are a few things I've seen in pull requests.

Is there anything immediately obvious that shouldn't be in my pull request?

Testing

Do Not Test Implementation Details

To the greatest extent possible, your tests should not rely on implementation details. Implementation details are the parts of your code that implement the feature, user story or interaction. Tests that rely on implementation details need to change each time the implementation changes, which happens much more frequently than any feature, user story or interaction changing. Writing your tests against implementation details is a major cause of a brittle test suite, as tests will break each time you write new code.

Let's say you are writing a React Native application with some deeply nested and complicated navigation. It may be tempting to mock out navigation calls as a whole and simply assert they are called. You could imagine that if a screen turned into a popup modal which uses a show/hide mechanism instead of a call to the navigation router, your test would fail because the navigation call is no longer made. The implementation has changed, but the feature has not. Now imagine if all your tests were based on implementation details. How many times would you break the test suite when you change an implementation?

Am I testing an interaction with my application, or am I testing implementation details?
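To make the distinction concrete, here is a minimal sketch in plain Java. The names (OrderService, PriceSource) are hypothetical, invented for illustration, not from any real codebase. The test mocks the input boundary, triggers the interaction, and asserts only on the observable output, so an implementation change (say, adding caching or batching) would not break it:

```java
import java.util.List;

// Hypothetical example: the "input" boundary is PriceSource,
// the observable "output" is the returned total.
interface PriceSource {
    int priceCents(String sku);
}

class OrderService {
    private final PriceSource prices;

    OrderService(PriceSource prices) {
        this.prices = prices;
    }

    // The implementation may change (caching, batching, ...) but the
    // observable behaviour -- the total -- should not.
    int totalCents(List<String> skus) {
        int total = 0;
        for (String sku : skus) {
            total += prices.priceCents(sku);
        }
        return total;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Mock the input at the boundary...
        PriceSource fakePrices = sku -> sku.equals("book") ? 1500 : 500;
        OrderService service = new OrderService(fakePrices);

        // ...trigger the interaction, and assert on the output only.
        // We deliberately do NOT assert "priceCents was called twice";
        // that is an implementation detail.
        int total = service.totalCents(List.of("book", "pen"));
        if (total != 2000) {
            throw new AssertionError("expected 2000, got " + total);
        }
        System.out.println("total=" + total);
    }
}
```

If OrderService later batched its price lookups, this test would still pass, because the feature (computing a total) has not changed.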
Mock Inputs and Capture Outputs

When writing tests, be concerned with user interaction. Anywhere the user may be interfacing with your software (e.g., through an API call, or through a UI) should be where you start your tests. Beyond user interaction, such as in unit tests, the value you get for the time you put in diminishes greatly. In general, I follow this framework for writing test cases:

Mock all inputs (e.g., GET requests, database reads).
Trigger user interaction (e.g., click on a button, call an API function).
Capture all outputs (e.g., POST requests, database writes).

At the end of your tests, you should assert that the outputs are what you expected.

Am I only mocking external inputs, and not implementation details?

Linting

It's important that you follow some standard of code style guidelines in the project. A codebase with multiple styles is like reading a novel where the author switches tones randomly. Codebases with a single style, like books, increase comprehension by allowing for faster reading. If your codebase does not have an automatic linter yet, this might be a great feature to bring up with your manager. In the meantime, it's best to follow the style of whichever programmer came before you. Don't start implementing your own style because you think it's better, unless you can change the entire codebase to your style. Otherwise your change will most likely get left for the next developer, who introduces their own style, until you have style spaghetti.

Ideally, your codebase should have some sort of automatic continuous integration that runs on your pull requests to make sure your code follows the style guidelines of the project. If it doesn't, this is another great feature to bring up with your manager to improve code quality.

Does my contribution follow the style of the project?

Logging

Logging Levels

Debug: Very important message for local development.
Information: Useful for understanding the flow of control. No attention is required.
Warning: There was an issue, although the task was able to continue. You should probably do something about this.
Error: There was an issue and the task has reached an unrecoverable state. You need to do something about this.

Debug Logs

Personally written debug logs can be useful, but usually aren't. Do not commit your personal debug logs, because they don't have any meaning to anyone other than yourself. If you are going to commit debug logs, they should be useful to the whole team and to future developers. A debug log like "failure" is not useful. It should instead be formatted like this: "POST /api/orders/${id} Unsuccessful. Reason: ${reason}". Notice how the debug log now gives context about where in the flow of control it sits, and the log is easily searchable in the codebase if there were issues. At this point, this log could be considered an information-level log.

Too many debug logs are bad, as it can become hard to find the new logs you put in for your own development. Each time you commit a debug log to the codebase, it should be immediately obvious to everyone why it needs to be kept. Even well-formatted debug logs should not be tolerated by your team in excess.

Will future developers understand why this log is needed?

Data Models

Be More Concerned with Your Data Models than Your Functions

There is one thing that is guaranteed to live longer than your code, and that's your data models. Data models are the fundamental abstraction that defines your codebase's interactions. Those interactions change all the time, but your data models will rarely change. Data models are very important for future developers to understand your application. Typing a Java response as a JsonNode, or labelling a TypeScript object with type "any", gives future developers no insight into what interactions they can make in your application. Even worse, they may add to your data model, misconstruing the original abstraction your data model was meant to capture.
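As a hedged illustration (a hypothetical Order model of my own, not from any particular codebase), compare an untyped payload with an explicit data model. The explicit model documents the abstraction at a glance and lets the compiler enforce it:

```java
// Hypothetical example. With an untyped payload, nothing documents
// what an "order" actually is:
//   Map<String, Object> order = parseJson(body);  // which fields exist?
//
// An explicit model makes the abstraction visible and checked.
// (Records require Java 16+; a plain class works the same way.)
record Order(String id, int amountCents, String currency) {}

public class DataModelDemo {
    public static void main(String[] args) {
        Order order = new Order("ord-543", 2500, "CAD");

        // Future developers can see every interaction the model allows.
        System.out.println(order.id() + " " + order.amountCents() + " " + order.currency());
    }
}
```

A future developer reading Order knows exactly what the application exchanges; a JsonNode or "any" tells them nothing.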
Think hard about your data models, as they can make or break the future of your application.

Can I make my data model simpler?

References

A Philosophy of Software Design by John Ousterhout
The Pragmatic Programmer by David Thomas and Andrew Hunt

How to Publish Request Metrics to AWS CloudWatch from Spring Boot

Ever wondered what's the easiest way to publish Spring Boot request metrics to CloudWatch in AWS? In this article I explain the most common way to gather Spring Boot request metrics, such as request timing and response status codes, then publish and plot the data in AWS.

Prerequisites

An AWS account.

Spring Boot Dependencies

The application I am using in this article uses Spring Boot version 2.1.18.RELEASE; however, the steps should be relatively similar for newer Spring Boot versions. Please note that these are the dependency versions that work for me based on my Spring Boot version:

org.springframework.boot:spring-boot-starter-actuator
software.amazon.awssdk:cloudwatch:2.17.281
io.micrometer:micrometer-core:1.5.17
io.micrometer:micrometer-registry-cloudwatch2:1.5.17

Micrometer

Before we move on, it's important to talk about a library closely related to gathering metrics in Spring Boot. The most common way to gather metrics is through Micrometer. Fundamental to Micrometer is the concept of a "meter", an abstraction for collecting metrics data. Meters can be dimensional to allow for various kinds of tracking across time, and Micrometer supports a wide range of meter types such as timers, counters and gauges.

Gathering Metrics

Spring Boot Actuator does most of the legwork involved with collecting request metrics, although some configuration is required to get metrics production ready. In my application, I want to enable the fewest metrics possible so that I won't be spamming CloudWatch with metrics I'm not interested in. This keeps the data relatively clean, and is more scalable since you pay for the metrics you send to CloudWatch. The following configuration disables all metrics except for the request metrics I am interested in:

```yaml
management:
  metrics:
    enable:
      all: false
      http.server.requests: true
```

Next, we need to expose these metrics so that AWS can fetch them.
Luckily, Actuator provides us with another simple configuration parameter:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: metrics
```

All request metrics can now be accessed at ${base_url}/actuator/metrics. When our application goes to publish metrics to AWS, it will use this endpoint to do so.

Publishing Metrics

For simplicity, we will be connecting to AWS using an access key and secret key in plain text. In an ideal world, we would inject these secrets into our Spring application using some sort of secret manager like AWS Secrets Manager; however, that could be a whole blog post in itself. In the following code, we are going to do two things:

Build the CloudWatch client in our application by authenticating the application with our AWS account.
Create the MeterRegistry where our metrics will be registered and stored in our application before they are sent at intervals to AWS.

```java
@Component
public class CloudWatchUtil {

    private CloudWatchAsyncClient cloudWatch;

    @Bean
    private MeterRegistry meterRegistry() {
        AwsCredentials awsCreds = AwsBasicCredentials.create("aws-access-key", "aws-secret-key");
        StaticCredentialsProvider scp = StaticCredentialsProvider.create(awsCreds);
        CloudWatchAsyncClientBuilder builder = CloudWatchAsyncClient.builder()
                .credentialsProvider(scp)
                .region(Region.of("aws-region"));
        cloudWatch = builder.build();

        CloudWatchConfig cloudWatchConfig = new CloudWatchConfig() {
            @Override
            public String get(String key) {
                return null;
            }

            @Override
            public String namespace() {
                return "my-namespace";
            }
        };

        return new CloudWatchMeterRegistry(cloudWatchConfig, Clock.SYSTEM, cloudWatch);
    }
}
```

The CloudWatchConfig is a Micrometer paradigm for customizing the CloudWatchMeterRegistry. This documentation from Micrometer explains how to do this. Returning null in get(String key) keeps all the defaults, which send metrics to CloudWatch every minute under the namespace returned by namespace().
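If the one-minute default interval doesn't suit you, one hedged option is to override step() on the same CloudWatchConfig. This is a sketch rather than the article's code; Duration is java.time.Duration, and the five-minute value is an arbitrary example:

```java
// Sketch: publish every five minutes instead of the default one minute.
// Everything else keeps the defaults from get(String key) returning null.
CloudWatchConfig cloudWatchConfig = new CloudWatchConfig() {
    @Override
    public String get(String key) {
        return null; // keep every other default
    }

    @Override
    public String namespace() {
        return "my-namespace";
    }

    @Override
    public Duration step() {
        return Duration.ofMinutes(5); // publish interval
    }
};
```

A longer step means fewer (and cheaper) data points in CloudWatch at the cost of coarser resolution.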
⚠️ The user/role your application runs on in AWS will need the "cloudwatch:PutMetricData" permission.

Plotting Metrics

Metrics are published in two different units to CloudWatch. http.requests.count is a singular unit (called count) of HTTP requests. http.requests.avg, http.requests.max and http.requests.min are all request timings in milliseconds. Each request is tagged so that you can group requests by the following information:

Response status code.
URI being called.
Exception (if any).
HTTP method.

For example, I plotted a graph that shows the number of times a URI returns the 500 status code, and another graph that shows the average request timing for each URI.

Future Improvements and Considerations

As mentioned earlier, it would be best to inject your AWS secrets at run-time via a secret manager or using environment variables. This can be done by incorporating an AWS secret manager dependency (similar to the CloudWatch dependency) to securely import your secrets at run-time.

Have any comments or improvements on the above article? Let me know by submitting a Pull Request on this website's GitHub repository.

Authenticating GitHub Actions with AWS using Terraform

This article guides you through authorizing your GitHub Actions workflow with AWS and helps you understand the underlying mechanisms.

Prerequisites

An AWS account.
Some knowledge of AWS IAM (Identity and Access Management) roles and policies.
Terraform 1.8.4.
A Terraform configuration linked to your AWS account (Tutorial).

OpenID Connect

OpenID Connect (OIDC) is the gold standard for authentication, and it's the simplest way to securely verify the identity of our GitHub Actions workflow with AWS. There are some great in-depth articles out there that can explain OIDC better than I can, like this one from the OpenID Foundation and this one from Okta, but I will briefly explain OIDC for the purpose of this article.

OIDC runs on the OAuth2 authorization framework. OAuth2 doesn't provide a method for verifying the identity of a user or entity; rather, it grants access to resources (e.g., an admin-only page, a database or a set of images). Using the authorization capabilities of OAuth2, OIDC securely verifies the identity of a user or entity. OIDC grants a login "session" to the client so the client can use a single identity to request multiple resources. Using GitHub's OIDC Identity Provider (IdP) server, we can grant a login session to our GitHub Actions workflow from AWS so that we can work with our AWS account from GitHub Actions.

Terraforming the OIDC Configuration

When Terraform creates infrastructure in our AWS account, it assumes the role of an IAM user (which we should have set up when creating a Terraform configuration linked to our AWS account). Sometimes we need some information from this role (like the account ID) to complete certain actions. Since we don't want to store these things in plain text, AWS has created the aws_caller_identity data source.

```hcl
data "aws_caller_identity" "current" {}
```

We also need to define the IdP who is going to be giving out the identity of our GitHub Action to AWS.
Similar to the aws_caller_identity data source, we need to define the location of this IdP in Terraform so that we can access information about it later.

```hcl
data "aws_iam_openid_connect_provider" "github_actions" {
  url = "https://token.actions.githubusercontent.com"
}
```

Our GitHub Action identity must be registered in AWS in order for trust to be established between the two parties. To do this, we must create an IAM policy which uses information from both parties. Most of the information on the IAM policy principals and conditions for authorizing GitHub with AWS is specific to GitHub, and as such can be found in this article by GitHub. If you are having trouble establishing an authorized connection between GitHub and AWS, I would start with that article, since there are a few nuances depending on your environment. The following is the IAM policy document that works for me.

```hcl
data "aws_iam_policy_document" "github_attachment_policy" {
  version = "2012-10-17"

  statement {
    sid     = ""
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/token.actions.githubusercontent.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:${var.GITHUB_ORGANIZATION}/${var.GITHUB_REPO}:ref:refs/heads/${var.GIT_BRANCH}"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = data.aws_iam_openid_connect_provider.github_actions.client_id_list
    }
  }
}
```

Take note how we are using the aws_caller_identity and aws_iam_openid_connect_provider data sources from earlier.

💡 ${var.VARIABLE} is a Terraform variable. Learn how to set up Terraform variables here.

💡 ${var.GIT_BRANCH} safeguards against unauthorized infrastructure changes. I set the branch in my configuration, and only certain people can approve changes to it.
This, in addition to least-privilege AWS IAM roles, further secures the GitHub Action.

Next we specify the AWS IAM role that the GitHub Action is going to assume, and attach the policy we created to this role. I've included ${var.GITHUB_REPO} and ${var.GIT_BRANCH} in the name of this role for easy identification (in case we have multiple branches or roles).

```hcl
resource "aws_iam_role" "github_pipeline" {
  name                 = "github_${var.GITHUB_REPO}_${var.GIT_BRANCH}_branch"
  description          = "Role used by the GitHub Actions pipeline in the ${var.GITHUB_REPO} repository on the ${var.GIT_BRANCH} branch."
  max_session_duration = 3600 # 1 hour
  assume_role_policy   = data.aws_iam_policy_document.github_attachment_policy.json
}
```

Authorizing in the GitHub Actions Workflow

Finally, we need to assume the role we created inside a GitHub Actions workflow so that it can make changes in AWS. Make sure this workflow runs on the branch we specified in ${var.GIT_BRANCH} earlier.

```yaml
name: Authenticate with AWS
on:
  push:
    branches:
      - 'branch_name'
permissions:
  id-token: write
jobs:
  authenticate-with-aws:
    runs-on: ubuntu-latest
    steps:
      - name: Authenticate with AWS
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-region: ca-central-1
          role-to-assume: "arn:aws:iam::927869390122:role/github_$_$_branch"
```

The important things to note here are:

Adding id-token: write to the workflow so that the identity token can be fetched from GitHub's IdP server and sent to AWS to authorize the workflow.
Setting role-to-assume in the "Authenticate with AWS" step to the role we created earlier using Terraform.

Next Steps

To make any changes to AWS, create a new IAM policy with only the minimal amount of permissions. Then, attach this policy to the github_pipeline role we created in this article.

Have any comments or improvements on the above article? Let me know by submitting a Pull Request on this website's GitHub repository.

W23/S23 Work Term Report

After careful consideration, on October 21st, 2022 I made the decision to join RBC in downtown Toronto. I learned how large companies run their engineering teams, and about the dedicated employees who make up the bank. I met with many interesting directors and listened to thought-provoking executives. I had an amazing time at RBC, and I see the opportunities that RBC provides for each employee.

Information About the Employer

RBC needs no introduction; it's the top bank in Canada. With locations across the country, it's hard not to see an RBC branch, ATM or advertisement. Instead, I thought I'd share some interesting facts about the bank, sourced from its employees:

The corporate headquarters at 200 Bay Street has 71,000g of 24-carat gold on its 14,000+ windows. The gold windows reflect heat radiation, keeping the building cool in the summer and warm in the winter.
RBC will invest $500 million in its Future Launch initiative over a span of 10 years, helping young people like me get the job they want.

Job Description

In the first half of my work term, I moved many of my team's legacy applications from Red Hat OpenShift 3 to OpenShift 4. From writing code, to testing, to vulnerability management, quality assurance, deploying into production, and production implementation verification, I learned what it takes to get an application into production.

In my second term, I worked on feature development for the Payment Orchestration squad. The service I worked on was responsible for orchestrating data from upstream services and sending it to the appropriate team who would process the payment. In this position, I got a better idea of the software development lifecycle, organization and culture at the bank.

Across both terms, I participated in alternative opportunities. I started working with Bojan Nokovic, Ph.D., on an AI-related project to help detect fraudulent sign-ins.
Additionally, I led two new developers on an internal project to streamline developers' task management. I learned the most about myself and about the bank when I pursued these alternative opportunities.

Goals

Book 6 coffee chats with people from other teams ✅

I met as many people as I could at the bank because I wanted to understand engineering at a larger scale. Since I was a co-op, I had the perfect excuse for messaging directors and asking them to chat over coffee. Here are some of the interesting people I met:

Jim Miller (Recruiter). Jim is incredibly selfless. See my LinkedIn post about Jim. I wish him and his wife well, and I hope he gets everything out of life.
Geoffrey Peart (Senior Director, Digital Agile Practice). Jeff was around when RBC started to move from old-school banking to digital banking. He is one of the pioneers of Agile within RBC. He gave valuable insight on agile, the Cynefin framework, and solving team challenges.
Paul Chester (Director, OpenShift Infrastructure). Paul has been with the bank for over 20 years. He told cool stories of moving application code from the mainframe into the cloud. I asked questions about how he runs his team, and he gave interesting answers like establishing processes so the team runs itself, and delegating duties.
Kevin Kwong-Chip (Senior Manager, Open Banking Development). I asked Kevin questions related to team conflict. He gave tools and tips for dealing with personal frustrations, and we had an interesting conversation about different types of workers.

Become familiar with at least one popular technology used at RBC ✅

I made this goal in my first term, when I wasn't excited about the technology I would be using. So, I wanted to dedicate some time to at least one marketable technical skill. I'm happy to say that I've come out of RBC with many technical skills. Red Hat OpenShift Container Platform 4, Jenkins, Spring Boot 3, and Java 17 are just some of the technologies I am now familiar with.
I'm very pleased that I was able to work with so many different technologies used across the industry during my work term at RBC.

Make contributions to at least one inner source repository ✅

RBC's "inner source" is an open-source-style ecosystem for RBC employees. It contains software like Angular component libraries, Linux Docker containers preinstalled with SSL certificates, and Spring OIDC components. I wanted to make a lasting impact at RBC, and this was a way for me to do that. I was able to add a missing link to some documentation, but I didn't dedicate nearly enough time to this goal to make a lasting impact. Luckily, I believe I contributed enough to my team to make the impact this goal was originally aiming for. Nonetheless, I did make a contribution to an inner source repository, completing my goal.

Lead an internal project ✅

At my second coffee chat, Jim Miller mentioned that the best way to improve confidence at work is to find areas where I can apply leadership. Solidifying your position as a leader in some aspect makes you feel useful and established, which helps with confidence. I found an opportunity to lead two new developers on a project, to raise their technical knowledge and to establish some foundations they wouldn't get to learn in their day-to-day. Overall, we faced many different productivity and motivation challenges, but I'm happy to say that we came out of the project with something instead of nothing. I think both developers were appreciative of my guidance, despite us not having come as far as we wanted to.

Migrate a Spring Boot 2 application to Spring Boot 3 ✅

I knew this goal was going to be my largest while at the bank, so I set it to make sure I would focus on completing it within my term. I'm happy to say I am the first to upgrade one of my team's services from Spring Boot 2 to Spring Boot 3. This is important because it creates a guide for all other Spring Boot upgrades within my team.
The challenge with upgrading Spring Boot applications is not upgrading Spring Boot itself (they provide an awesome guide on doing that), but rather all the third-party dependencies that need to be updated for compatibility with Spring Boot. For example, my app's Spring Security component required major updates that necessitated a deep understanding of Spring Security. Sometimes a method is removed in an upgrade, and I had to hunt down its replacement. You can read more about another challenge I had when upgrading Spring Boot in this blog post.

Conclusions

The beginning of my time at RBC was tumultuous, but it really turned around. I'm grateful to have met the people I got to meet, and to have had the experiences I had. If I had to do it all over again, I would, and I think that's a good indication of my time at RBC.

Acknowledgements

Zhiming Xu (Lead Software Developer) was a major influence on my return to RBC in the summer. My winter term was hard, but Zhiming guided me and reassured me that our project was unusually difficult. He inspires me to be more resilient and patient when things go awry.

Pradeep Sappa (Manager, Payment Orchestration Services) added fuel to my fire. His care and concern are unmatched, and he consistently donated his time to teach me and listen to my concerns. His technical knowledge is rich; I have a lot to learn from him.

Shubhi Gupta (Director, Digital Security and Shared Services) was an integral part of my success at the bank. She helped me make multiple connections, and gave me many opportunities to flourish.

Bojan Nokovic, Ph.D. (Principal Engineer and Research Scientist) gave me many alternative opportunities at RBC. I worked with him to maintain an AI project he led a few years ago, and he asked me to review a paper and presentation he was publishing in an AI journal. He's given me my first real exposure to academia, and I'm very grateful for the opportunities he handed me.

Managing Multiple Release Versions

Recently, I made a mistake while releasing a new version of a component. My team maintains various Spring components whose versions match Spring Boot's release versions. With the release of Spring Boot 3, I made the change to upgrade our component. A few weeks later, a colleague needed to make a change to an older version of the component. I wasn't sure what to do in this situation, which took me down a rabbit hole on how others manage multiple release versions.

Spring Boot

For Spring Boot, everything that goes on the main branch is preparation for the next release version. When a minor version is released, a new branch is created off of main. Interestingly, each branch includes both the major and the minor version for each release (for example, 2.3.x or 3.1.x). This is an obvious strategy to accommodate their support policy. If a change to an older release is needed, the Spring Boot team will merge the older release branch containing the change into the newer release branches, until the change is merged into the main branch.

The Spring Boot team uses very little automation in their release strategy. Each pull request goes through a rigorous build process, but maybe this is all that is required in a repository that is intensely active and frequently changed. Managing multiple minor versions requires more work, but the Spring Boot team has pared the work down to just the essentials. In my research, this is by far the most popular release strategy for managing multiple versions. Other projects that use this type of release strategy are the Phoenix Framework, Scala, and Kafka.

Vue

Vue uses different repositories to manage their major versions. Since changes won't need to be merged upstream, and the codebases are largely different, this is an appropriate design for managing multiple versions. Where Vue gets interesting is when it needs to make a change to a previous version.
When Vue 3 needs to revert a change, for example, they will release the change under a new incremented version number instead of fixing the old version and merging the changes upstream.

Alternative Considerations

Feature Toggles

It's important to consider whether managing multiple versions is really necessary. A common alternative is putting new changes behind feature toggles. For example, you can first release a change into the wild and gradually enable it for a subset of your users. Once it's been well tested by more than half of your users, it's safe to release the feature and remove the toggle from your code. This type of release strategy reduces the need for managing multiple versions, since changes should be well tested and there should be no need to "go back" and make a change.

Non-decreasing Versioning

Code that is not depended upon by other applications most likely doesn't need multiple release versions. Web applications are a good example of code that doesn't need multiple release versions. If a previous version needs a change, it can be added to the next release. Since services that may depend on your application interface with it via some API, not a dependency version, releasing the new change in an incremented version works just fine and solves many of the headaches that come with managing multiple release versions.

Conclusions

Every company will have its own strategies for maintaining multiple releases of the same component. Establishing a support policy will help decide the effort required to maintain multiple releases, or whether maintenance of multiple releases is really needed at all. In my case, since our component matches the release version of Spring Boot, it makes sense to adopt the same release strategy Spring Boot uses for releasing their framework.

S22 Work Term Report

Some things have changed, and some things have remained the same compared to my last coop work term.

Information About the Employer

This is my second coop with Value Connect. To get a general overview of what Value Connect is and what they do, check out the second section in my S21/F21 Work Term Report, or you can check out our website at valueconnect.ca.

Compared to my last work term, the company has shifted its efforts towards bringing more lenders onto our platform. We've got multiple projects on the go, and many of my coop work term goals relate to the company's efforts. We're also growing at a notably faster rate than during my last work term. The company is larger than it was before, and each position is filled by someone remarkably stronger than the last.

Job Description

At Value Connect, a Software Developer is expected to work on something software-related that progresses the company in one way or another, to raise the development team's knowledge base by sharing and/or implementing their ideas, learnings and findings, and to be receptive to feedback from co-workers and the business.

The big difference in this work term compared to my last is my promotion from Junior Software Developer to Software Developer 🎉. My promotion meant an increase in responsibilities and a requirement to make important decisions related to the development team and company. It doesn't feel like much has changed compared to my last position. I always tend to stick my head into important developer-related decisions, and maybe that's why they kept me around for a third work term. It's definitely not an impeccable coding ability.

Goals

Update Spring Boot to Version 2 ✅

Over the course of my work term, upgrading our legacy web application through a new major version was my assigned wildly important goal, and single-handedly the biggest impact I could make on the business over the next four months.
Spring Boot 2.0.0.RELEASE is the minimum version required to work with the Spring Authorization Server, which supports the standard OAuth 2.0 authentication protocol and allows us to implement Single Sign-On (SSO) capabilities for our entire suite of tools.

Admittedly, the update to Spring Boot 2.0.0.RELEASE should have happened much sooner. It's a very daunting task, and that's perhaps why it was put off for so long. To put it in perspective, Spring Boot is an opinionated library, stocked with a whole bunch of opinionated books. The library contains exactly the books it wants: no older, no newer. The problem is that this rule was lost in translation between all the changes to the codebase. Some teams (even mine, early on) naively inserted their own opinionated books over Spring Boot's opinionated books. Unfortunately, our opinionated books are not compatible with newer versions of their opinion, and we're now missing out on all the cool new things that their newer opinionated books provide.

I originally thought that I completed this goal because of luck. When Chris (the CEO), Ben (the Lead Developer) and I went out for dinner to discuss the future of the web application, we originally estimated that the updates would take over 5000 man-hours. This was the worst case, but it was the most reasonable estimate.

The most important thing that I learned from this work term goal is about planning for unplanned work. All too often, business projects, internal IT projects, updates and changes fill every hour when planning developers' work, leaving no room for the unplanned. It feels good to give time to unplanned work and still accomplish your goals on time.

On-board new Developer ✅

At the start of my work term, our development team had shrunk to a meager three people, one of whom had put in his two weeks' notice. It was important to get a new developer onto our team and get them up to speed, so that we could continue the velocity that we'd had previously.
After a ton of searching, a software developer named Rafael gave a stellar interview and joined our team two weeks later. Interestingly, I think that this goal taught me more than I taught Rafael. When he asked a question, it was easy to realize that whatever he was asking about was over-complicated, or didn't have enough documentation surrounding it. Sometimes he would ask questions that I didn't know the answer to, and we would figure it out together. He took a much more in-depth approach to learning our web application, and he would teach me things that I had no clue about. Moreover, during code reviews, Rafael was quick to implement some design patterns that I had never seen before.

Overall, Rafael is a great hire for our team. He's been successfully on-boarded, and our velocity has improved since his hiring.

Implement New Authorization Server ❌

As mentioned earlier, a new authorization server allows the company to implement Single Sign-On (SSO) capabilities for our entire suite of tools. This means a single login for our web application, mobile application, and any future Value Connect service. This may not seem like a huge feat at first, but behind the scenes it means that we can tie any data to a single Value Connect account. If something goes wrong with a user's purchase made using an Apple ID or Google Play account, we can trace it to a single Value Connect user and better serve them that way.

I failed to achieve this goal because of how long the Spring Boot 2 update took. In hindsight, this was an unreasonable goal. I set it naively, thinking how great it would be to complete all these things in one work term, and how much I could help the company progress in doing so. If there is anything I can learn from this, it's to keep the SMART criteria in mind the next time I create goals.

Conclusions

I had more fun this work term than my last. The best thing about working for a small company is the impact that I've been able to make.
Want to work with the latest and greatest technology? Want to create a development team social every week to catch up on non-work-related topics? Are you interested in working more on development operations than developing the software? With a small company, you can be the change you want to see. Obtaining these things is as simple as talking to your supervisor and compromising with the company's needs. All three of those questions were ones I asked myself during my work term, and all three were answered with compromises that I was happy with.

Acknowledgements

A huge thanks to the success and sales teams for putting up with my team for a third of this year, and to anybody else who was hindered by the web application's updates but also saw the value and importance in them.

Thanks to my team, Ben and Rafael, for continuing to spark my flame. I learned more this work term than I have previously, and that's because of your willingness to share knowledge, provide constructive feedback and constantly push the limits of our team's ability. You guys are awesome; it's been a joy talking to you every day.

S21/F21 Work Term Report

"We should remember that good fortune often happens when opportunity meets with preparation." - Thomas Edison

I'm very lucky to be in the position I'm in. But I think there is one consistent action I've taken to aid my luck: seizing opportunities. Spotting an opportunity, bringing it to fruition and acting on it is something that I've gotten really good at. In fact, none of my goals this work term are related to my job tasks. Many of them are the result of seeing an opportunity and acting on it.

As part of my first coop work term, I've been working full-time with Value Connect. Value Connect is a property appraisal marketplace that's changing the way everyone feels about appraisals. Today's property appraisals are mostly offline. They include frustrating back-and-forth and mundane data re-entry. We're saving underwriters 85% of the average time spent on each appraisal by making appraisals ridiculously simple and efficient. I chose Value Connect because I see opportunity. There is opportunity in getting to talk to my CEO every day, being an integral part of creating my team's culture and making decisions for the foundation of the company. There is ample opportunity here, and I'm thrilled to be a part of it.

Job Description

I work on a lot of projects. At the beginning of my work term, my responsibility is to work alongside our designer to develop a replacement front-end for our appraisal order system. I work hard to integrate myself with the processes and workflows that my teammates use. In the following months, I work on the mobile inspection tool, an app that solves common issues and concerns of property appraisers. For the last two months, I work full-time on our web application. I couldn't be happier with the refreshing shift in focus. While not my main focus, much of my job also includes enhancing development operations. My team and I have streamlined the process of pushing new features to production.
I'm proud to say that I'm a part of fostering a culture where you don't need to know any particular tools, frameworks or languages to get started. The skills required to start at Value Connect are an appetite for learning, an eye for opportunity and an optimistic attitude. Given that this is my first work term, it's awesome being able to start with no industry knowledge, yet work on as many projects as I have.

Summer 2021 Goals

Create a Pull Request for an Issue Related to the Back-End ✅

Technology: Java, Spring, Spring Security, Cross Site Request Forgery. Skills: Initiative, Problem Solving.

While I work on the new front-end, the rest of the team tackles the back-end. The work that I am doing may not make an impact until months after the time that I write it. I see back-end bug fixes move into production every other day. I want to make an impact on the company as soon as possible, and the fastest way to do that is by working on an issue related to the back-end.

I see my first opportunity while working on the mobile inspection tool. There is an issue when making a request to register a mobile appraiser. I spend a couple hours figuring out the best way to solve the problem by studying how the system handles web requests. I learn a lot about professional Java code and the Spring framework, but it's not enough to solve the issue. I employ the help of a co-worker, who points me in the direction of a Spring project called Spring Security. It's here that I learn about Cross Site Request Forgery (XSRF) and how attackers can leverage data from a different browser window to make a web request on the user's behalf. Spring Security enables XSRF protections by default for each web request. Fortunately, XSRF is only really possible in a web browser. I make a pull request that disables XSRF protections for our mobile requests. My first goal is complete.

Add a Linter to the Front-End Build Process ✅

Technology: ESLint, BASH, Pipelines. Skills: Creativity.
Git has a cool feature that shows who wrote each line of code and how long ago it was changed. In Value Connect's code base, some lines have not been changed in five years. It's awesome that developers can visibly see who implemented what infrastructure. How can I create infrastructure that other developers will see?

```
console.log(';)');    You, 2 months ago - commit message here
```

Thinking about where I can create some infrastructure, I come across code formatting. ESLint is a tool that automatically formats code and warns of language anti-patterns. Enforcing the same format makes the code base easier to read; imagine a textbook with a different writing style in every chapter. Additionally, the less time developers spend reading, the more time they can spend writing. Implementing a linter on my current project would be a great idea!

I complete this goal by creating a linting configuration file that tells ESLint how I want the code to be formatted. I carefully configure rules and include why each rule is enforced; I want future Value Connect developers to make informed decisions on code enforcement. I write a BASH script that checks for linting errors before committing code to the code base. Lastly, I include a step in the pipeline that checks whether code is linted or not. The pull request takes 6 iterations before it's finally committed to the code base, where it will live forever.

Have a Cleaner Commit History ✅

Technology: Git. Skills: Communication, Writing.

20% of my time reading code is glancing over the lines that my predecessors wrote. The other 80% is looking at the commit history, wondering what on earth they were thinking. Commit history should accompany code changes with the reason why a change is being made. It's clear to me that nobody gets their commit history right the first time, every time. In fact, getting commit history wrong is so common that Git provides a suite of tools to edit commit history.
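For a taste of what editing history looks like in practice, here is a minimal, self-contained sketch in a throwaway repository; the file, commit messages and alias are made up for illustration:

```shell
set -e
# Contrived demo in a throwaway repository; nothing real is touched.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main   # 'init -b' needs git >= 2.28
git config user.email demo@example.com && git config user.name demo

echo 'one' > file.txt && git add file.txt && git commit -qm 'add file'
echo 'two' >> file.txt && git commit -qam 'oops, bad message'

# reset: undo the last commit but keep its changes staged...
git reset --soft HEAD~1
# ...then commit again with a message that explains why
git commit -qm 'append line two to match the new layout'

# alias: 'git ci' opens the editor with the staged diff shown inline
git config alias.ci 'commit --verbose'

git log --oneline
```

The history now contains the corrected message and no trace of the mistake, which is exactly what these tools are for.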
Over the next couple months, I spend extra time ensuring that I leave a good commit history. Here is a list of tools that I learned to complete this goal:

reset

Undoes commits. Particularly useful when I miss code that needs to be in my last commit, or when I include code that should not have been in the last commit.

alias

I use Git aliases to visually show code differences on the screen while I write commit messages. Now I can see exactly what I'm committing as I'm writing the commit message:

```
update content styles

inline and block code now look more similar

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# On branch documentation-page
# Your branch is up to date with 'origin/documentation-page'.
#
# Changes to be committed:
#       modified:   assets/css/content.css
#
# Changes not staged for commit:
#       modified:   content/coop_work_reports/valueconnect.md
#       modified:   pages/documentation/_section/_slug.vue
#
# ------------------------ >8 ------------------------
# Do not modify or remove the line above.
# Everything below it will be ignored.
diff --git a/assets/css/content.css b/assets/css/content.css
index c0f8fe6..67ad0ed 100644
--- a/assets/css/content.css
+++ b/assets/css/content.css
@@ -36,10 +36,13 @@
   font-size: 1.05em;
 }
-.nuxt-content code {
+.nuxt-content code, .nuxt-content pre code span {
   font-size: 1rem;
-  background-color: var(--codeBackground);
+  background-color: #f5f2f0;
+  font-family: 'JetBrains';
+}
+
+.nuxt-content p code {
   padding: 3px;
-  border-radius: 5px;
 }
```

rebase

Allows you to manually edit commit history. It includes a lot of features, like changing commit messages and adding or removing code from commits.

patch

An interactive menu for selecting which code to add to the next commit, as opposed to conventionally committing entire files.

Fall 2021 Goals

Add Linting to the Mobile App Pipeline ⚠️

Technology: ESLint, BASH, Pipelines. Skills: Planning, Problem Solving.
In April, I begin working on the mobile app. Linting the front-end was very successful, and I can see that our mobile app code base could really use the same love. However, there were a couple of issues with linting the front-end code base that I hope to remedy here. While linting the front-end code base, I continually played catch-up with new code, or code that had been written before my linting pull request got merged. I had to keep linting new pull requests that did not contain my linting configuration until the entire code base was linted.

During this work term, I'm taking Systems Programming (CIS*3050), which has a whole unit on BASH scripting. I take this opportunity to write a simple BASH script that checks whether changed files are linted. Instead of me keeping track of whether everyone's pull requests are linted, the pipeline runs the script and checks this for me. It will take longer for the entire code base to be linted, but there is a lot less manual work. This goal is currently in progress; there is a pull request out for it that is waiting to be reviewed.

Successfully set up Logging in the Mobile App ❌

Technology: Sentry. Skills: Creativity, Organization.

My team hosts a book club where we read Clean Code by Robert Martin to help develop our skills at work. In this book, I learn about wrapping third-party code with your own code. The most fascinating reason to do this is that if you ever need to replace third-party code (sometimes libraries get deprecated, or a library doesn't perform as well as one wants), you only need to change the file that wraps it. Better yet, you know exactly what the new library needs to do, because the functions that wrapped the last library give you context!
```js
const cleanPreviousTransactions = () => {
  if (Platform.OS === 'android') {
    RNIap.flushFailedPurchasesCachedAsPendingAndroid();
  } else if (Platform.OS === 'ios') {
    RNIap.clearTransactionIOS();
  }
};
```

I really want to try this out with the mobile app. We're currently using a library called Sentry. It's a powerful logging solution, but it's not properly set up and our implementation is all over the place. My plan for completing this goal goes like this:

1. See where we can improve on our current use of Sentry.
2. Write some documentation to standardize the use of logging.
3. Create a wrapper around the Sentry library calls using the documentation I wrote.

Start-ups are very fast paced. Before I know it, I get moved to a new project and am no longer working on the mobile app. I am unsuccessful in completing this goal.

Add Integration Tests to the Mobile App ❌

Technology: Detox, Pipelines, React Native. Skills: Initiative.

There are already a fair amount of unit tests in the mobile app, but there are no integration tests. Time and time again, we're about to release a new version of the app when someone finds an issue in an obscure area. The culprit? A code change on a completely different side of the app. Unit tests are great for testing isolated parts of the code base; integration tests ensure those isolated parts work in cohesion with each other. The next step to catching bugs in the mobile app is to add integration tests.

Once, while browsing some open-source code to check out how things are done in React Native, I came across a cool integration test library called Detox. It looks awesome, has pipeline capabilities, and works with React Native.

As with the last goal, I am unable to complete this goal. I see it as a side effect of my opportunity to work in this field. I'm not going to waste time dwelling on my failure; rather, I've already developed goals for the new project that I'm working on.

Conclusion

This work term exceeds my expectations.
I expect to be a junior developer completing tasks for those more experienced than me. Instead, I learn that it's not about how long you've been in the industry, but about seizing opportunities. Developers can go their entire career without touching a linting configuration. But I see an opportunity, and create one. Now, I'm the go-to person on the team when it comes to linting questions. I see many instances where someone acts on an opportunity and becomes the most knowledgeable on the team in that area.

Before this work term, I would have said that I'm in coop to get a head start on industry experience before I graduate. After completing my first work term, I'm in coop for the information I learn about the industry before I graduate: the type of information you can only obtain by working in the industry and keeping an open eye. I would not have learned how crucial it is to seize opportunities, or how to provide value to a team, without experience. That's the most important part of my work term, and of coop in general.

I'm excited to take what I've learned during my work term back to my academics. I already realize how many opportunities I have to grow. I can't wait to improve more in my next work term.

Acknowledgements

Throughout my work term, I received a great deal of support. I cannot begin to express my thanks to my supervisor, Ben Pearo, who gave me my first opportunity by reaching out during his hiring process. You allowed me to turn my spark into a flame. It's amazing to see how much you have grown in a managerial role.

I want to extend my gratitude to my colleagues at Value Connect for their patience and practical suggestions. In particular, I would like to thank my CEO, Chris Bisson, for his astonishing amount of patience and generosity. You all work so hard, and you create an inordinate amount of opportunities for the development team.

Special thanks to my girlfriend, who has been instrumental to my success at work.
You've supported me in multiple ways. Your drives and unconditional patience will forever be appreciated.

I'd like to recognize the assistance of you, the reader. If you're reading this, you most likely improve my life in one way or another. Whether you have a hand in the coop program, or are a friend who contributes to my well-being, thank you.