Development
June 6, 2022

Impactful LookML Code Reviews

Josh Temple
Co-Founder, Spectacles

If you're working on a team where more than one person writes LookML, you should ensure another developer reviews each change before you merge it into production.

Code reviews are essential for maintaining code quality and catching potential errors. By reviewing code changes, you can ensure that production code is maintainable, scalable, and accurate.

Code reviews aren't always rosy, though!

Perhaps you've worked on teams where code review was only a formality: a rubber-stamping process that rarely caught real issues. Or maybe you've been on the receiving end of a pedantic review from a senior developer with lots of annoying requests about formatting and spacing but no meaningful feedback.

We can do better! Impactful LookML code reviews should be:

  • Fast - developers shouldn't spend all day reviewing code!
  • Empathetic - consider the human on the other side of the request!
  • Educational - code reviews should make everyone smarter & better at their job!

With these goals in mind, how can we make reviews a good use of your team's time? Let's put ourselves in the seat of a LookML code reviewer and consider some guiding principles for impactful reviews.

Unique challenges to reviewing LookML

Developers have written volumes about general principles for code review, and I encourage you to read Philip Hauer's article Code Review Guidelines for Humans as well as Google's and Stack Overflow's standards. However, LookML isn't JavaScript or Python. So what makes reviewing LookML uniquely challenging?

  1. Dependency on the database. LookML contains a lot of SQL. And it's hard to review SQL thoroughly unless you're going the extra mile to check SQL dialect syntax and table schemas. Looker doesn't test for SQL errors, which places a large burden on the reviewer to ensure dimensions and joins will work in the warehouse. Thankfully, you can automate these "will it run" tests with Spectacles, saving you time to focus on topics like readability and architecture.
  2. Less flow. LookML is closer to a markup language like YAML than a programming language like Python. It doesn't have if/else statements or function arguments and return values. This means that LookML is flatter and can be less organized. Extends and refinements help a little here, but reviewing LookML typically looks like reading views or models from start to finish—dimension-by-dimension or explore-by-explore.
  3. Long files are commonplace. Since LookML views are built from database table schemas, they can easily span hundreds if not thousands of lines depending on the number of columns in the source table. Derived table queries make files even longer. Sometimes these files are autogenerated by Looker or an external script. It can be daunting to receive a code review request with thousands of lines changed.
  4. Lots of user-facing information. Most dimensions and measures are used in Explores. This means you must review user-facing details like descriptions, labels, and suggestions for readability and consistency.

Automate the boring stuff

Because LookML files are often long and full of user-facing content, teams create a lot of style conventions to try to keep things tidy and readable. Unfortunately, an unintended effect of these conventions is that reviewers spend far too much time giving stylistic feedback like:

  • This dimension should have a description
  • This view doesn't define a label
  • The name of this dimension doesn't follow our naming convention
  • These measures should be in alphabetical order

What a waste of a skilled analyst's time! Adding a style checker (also known as a linter) to your code review process or continuous integration pipeline automates this feedback, so your developers don't have to feel like human spell checkers.

At Spectacles, we're building our own style checker, lkmlstyle, which supports an extensive ruleset with lots of options for customization and configuration. It will be released in January 2023. LAMS is another popular style checker in the Looker developer community. What matters is that you use something to reduce the human time spent checking style conventions.
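
Running your style checker in continuous integration means style feedback lands on every pull request automatically. Here's a minimal sketch of a GitHub Actions workflow; the install and run commands are assumptions, so substitute the actual CLI for whichever checker you adopt:

```yaml
# .github/workflows/lookml-style.yml
# Sketch only: the install/run commands below are assumptions.
# Check the docs for lkmlstyle, LAMS, or whichever checker you choose.
name: LookML style check

on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install style checker
        run: pip install lkmlstyle  # assumed to be pip-installable
      - name: Check LookML style
        run: lkmlstyle .            # assumed to accept a project path
```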

If you're using GitHub, consider adding a CODEOWNERS file. CODEOWNERS lets you designate GitHub users or teams as reviewers for all or part of your repository. Then, when a pull request changes files they own, GitHub automatically requests their review.

CODEOWNERS allows you to set up ownership over different parts of your Looker project and automatically tag the correct reviewers. If separate teams maintain their own views, you can be sure they'll always review changes that affect the views they own.
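
For example, a CODEOWNERS file might map each team to the folders it maintains (the team names and paths here are illustrative):

```
# .github/CODEOWNERS
# Default reviewers for anything not matched by a later rule
*                    @your-org/analytics-engineering

# Teams review changes to the views they own
/views/finance/      @your-org/finance-analytics
/views/marketing/    @your-org/marketing-analytics
/models/             @your-org/analytics-engineering
```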

The impactful LookML code review

Once you have automation out of the way, here are some questions you, as the reviewer, should ask yourself about the change:

Do I understand the proposed change?

You should understand what the author wants the code change to achieve. The author should provide this context with the change request's title, description, and references to related issues or tickets. If they don't, ask for it.

If your team is struggling with inconsistent documentation of changes, you can also use a Pull Request template to prompt authors to document their changes thoroughly. Our friends at Brooklyn Data Co. have an excellent example for LookML that you could start with and customize for your needs.
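
A minimal template only needs to prompt for the context a reviewer will ask about anyway. The sketch below is an illustration, not Brooklyn Data Co.'s version:

```markdown
<!-- .github/pull_request_template.md (illustrative sketch) -->
## What does this change do?

## Why is it needed?
<!-- Link the related issue or ticket -->

## How was it validated?
<!-- e.g. a test Explore query or dashboard exercising the change -->

## Anything reviewers should look at closely?
```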

If the change is complex, like a long derived table definition or a complicated measure, ask for some comments to ensure future readers will also understand the code.
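
A short comment explaining the intent of a derived table is often enough. For instance, in this hypothetical example (the table and field names are made up):

```lookml
view: customer_first_order {
  # One row per customer with their first *completed* order.
  # Refunds and cancellations are excluded so they can't count
  # as a customer's first order.
  derived_table: {
    sql:
      SELECT customer_id, MIN(created_at) AS first_order_at
      FROM orders
      WHERE status = 'complete'
      GROUP BY customer_id ;;
  }

  dimension: customer_id {
    primary_key: yes
    sql: ${TABLE}.customer_id ;;
  }

  dimension_group: first_order {
    type: time
    timeframes: [date, month, year]
    sql: ${TABLE}.first_order_at ;;
  }
}
```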

If the change is too large or trying to accomplish too many things at once, it's a sign that the author should break it up into more manageable changes. A code change that's too big is more likely to contain errors and takes longer to review. Remember, we want code review to be fast and not tedious!

Does the code accomplish what it's supposed to?

Once you understand the purpose of the change, you'll want to check that the code is accomplishing what it set out to do. For example, if you're reviewing a LookML change that's supposed to fix a production issue, check that the code fixes it and doesn't introduce any new problems.

If you can't tell by reading the code, you can ask for a test Explore query or report that includes the change.

Are these changes clear to Explore users?

Most code changes in LookML will be experienced by end-users in Explores. Here are some questions you can ask:

  • Does user-facing help text (descriptions, labels) make sense to someone without the technical context?
  • Are measures explained well? Did the author provide information to the user about how the measure is calculated and when to use it?
  • Do measures define sensible drill fields for exploration?
  • Are unnecessary fields and Explores hidden?
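
To make those questions concrete, here's a hedged LookML sketch of the kind of user-facing polish to look for (the view, fields, and values are made up):

```lookml
view: orders {
  # Implementation detail: hide it so it doesn't clutter the Explore
  dimension: revenue_cents {
    hidden: yes
    sql: ${TABLE}.revenue_cents ;;
  }

  dimension: revenue {
    description: "Order revenue in dollars, before refunds."
    type: number
    sql: ${revenue_cents} / 100.0 ;;
    value_format_name: usd
  }

  measure: average_order_value {
    label: "Average Order Value"
    description: "Average revenue per order. Use it to compare order size across segments, not to track total revenue."
    type: average
    sql: ${revenue} ;;
    value_format_name: usd
    # Drill fields assumed to be defined elsewhere in this view
    drill_fields: [order_id, created_date, revenue]
  }
}
```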

Reviewing code through the lens of a Looker user ensures these changes won't go to waste—people will be able to use new fields and Explores to answer their business questions!

Giving feedback

Answering the questions above will likely give you some feedback for the author. I previously referenced Philip Hauer's article, Code Review Guidelines for Humans, which has thoughtful tips on giving empathetic and educational feedback. When giving feedback, remember to ask yourself, "Is it kind? Is it necessary? Is it true?"

At Spectacles, we like to use GitHub suggestions, which make it easy for the author to accept small, proposed changes to the code from the reviewer.
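
In a review comment, a suggestion is just a fenced ```suggestion block containing the replacement lines, which the author can apply in one click. For instance, proposing a label on a hypothetical dimension:

````markdown
```suggestion
  dimension: customer_segment {
    label: "Customer Segment"
    sql: ${TABLE}.segment ;;
  }
```
````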

By asking questions about the code and evaluating the change in the larger project's context, you'll help your LookML developers write code that is clear to Explore users and accomplishes its purpose.

Giving feedback can be difficult, but following these simple guidelines will make it smoother and more beneficial for everyone involved.

Have other ideas or best practices for LookML code review? Let us know on Twitter, @SpectaclesCI.