Mar 21

How to Implement the Automerge Feature That Is Missing from Bitbucket Cloud

Recently I was involved in a changeover from Bitbucket Server to Bitbucket Cloud. It was pretty seamless, but there was one major issue – automerge, which is available on Bitbucket Server, is not available on Bitbucket Cloud. For many people, maybe that’s not a big deal, but for us it definitely felt like a step back. We had to do TWO pull requests anytime we did a release, one to master from the release branch and one from master back into the core development branch. In the event of a hotfix… yep, two pull requests there too. In a world where automation is used to make life simpler and to make sure the key steps get done, not having automerge just felt wrong to me and all these other people. So I decided to implement a solution and make it as generic as I could so you can use it for whatever projects you have. I chose to utilize Bitbucket Pipelines as the driver for this feature. If you haven’t used Pipelines, you are encouraged to check out their intro here. Let’s check out the solution.

First I’ll describe a more “ideal” way. Then I’ll describe a modified way to implement the same pipeline. The modified way doesn’t require the “bot”… but it’s not as nice. However, it will allow a small group to get things going quickly.

Ideal way:

  1. Create a “bot” user – this is really just a dummy user that you will give lots of permissions to so it can do stuff on your behalf.
  2. In the repo, go to Pipelines -> Settings.
  3. Turn on Pipelines.
  4. Go to Pipelines -> SSH keys and generate a key pair.
  5. Copy the public key and paste it into the bot’s profile in the SSH keys section.
  6. Go to the repo you want to configure.
  7. Choose the development branch.
  8. Give the bot permission to merge into that branch without a pull request.

Slightly modified way:
If you decide not to go the bot route, you still need to do everything else. You just tie the SSH keys you generate to your own user instead. And as long as you have sufficient permissions it will work just fine.

Now that you have Pipelines enabled, let’s define a simple pipeline that takes whatever is in “master” and merges it into “develop”. The easiest way to do it is probably just to copy this one.

# Merges master into develop anytime master is updated.
# A bit of a hack, needed because Bitbucket Cloud does not, at the time of this creation,
# support automatic merges in their cloud product.
# Built with lots of trial and error. No promises.
# Useful info: https://confluence.atlassian.com/bitbucket/variables-in-pipelines-794502608.html
# See https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# I tried to use variables where possible to keep it generic.
# Author: Josh Ahlstrom
# Initial Creation Date: 2019-03-19

# NOTE: Pay no attention to the image utilized below. I chose dotnet, but it probably doesn't matter
#       which one you choose since this is a script, which means we'll be working "from the shell" anyway.
# -----
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update -y
            - apt-get install git -y
            - mkdir repos
            - cd repos
            # full name variable used to make things generic
            - git clone git@bitbucket.org:$BITBUCKET_REPO_FULL_NAME
            - cd $BITBUCKET_REPO_SLUG
            # identity variables for git, i.e. "$GITBOT_USERNAME" and "$GITBOT_EMAIL"
            - git config user.name "$GITBOT_USERNAME"
            - git config user.email "$GITBOT_EMAIL"
            - git checkout develop
            - git merge master
            - git push

Alright, so now let’s talk about what this does. Keep in mind that a pipeline can be thought of as running in its own little Linux VM, so you’ve got quite a bit of freedom in what you do.

A quick word of caution: pipelines appear to use “build” minutes. I think they should be called pipeline minutes because you don’t actually have to build anything in a pipeline, but I digress. If I read it correctly, most accounts come with 50 minutes for free. I ran it a BUNCH of times getting these scripts set up (maybe 20 times or so), which only totaled about 5 minutes. Anyway, if you have limited minutes that you’d prefer to use otherwise, then this might not be the way to implement automerge for you – but maybe it’ll give you some other ideas.

Here goes. I’ll show each piece of the script and then describe what it does.

image: microsoft/dotnet:sdk

First, we tell the pipeline that it’s going to run under an image that knows all about dotnet. There is a Docker one, a JavaScript one… lots of options. I don’t think it matters (as noted in the snippet) which one you use because we’re just doing basic shell commands. It appears that all the images can handle that much.

pipelines:
  branches:
    master:

You can apply different pipelines to different branches; in this case we just want to apply this pipeline to the master branch.

      - step:
          script:

The first (and only) step in our master branch pipeline is to run a script. This just runs right on the command line BUT it has access to certain variables that you can set up in Bitbucket, either at the account level or the repo/pipeline level. Some are also global to Bitbucket. More on variables later.

            - apt-get update -y
            - apt-get install git -y

The first two lines of the script just install git.

            - mkdir repos
            - cd repos

The next two lines create a directory for us to clone our repo into and move us into that directory.

            # full name variable used to make things generic
            - git clone git@bitbucket.org:$BITBUCKET_REPO_FULL_NAME

The next two lines (including one comment line) actually clone the repo. We use a Bitbucket variable here to make things generic and simple to use on multiple repos. The variable, $BITBUCKET_REPO_FULL_NAME, returns the full name of the repo, including the account name.

            - cd $BITBUCKET_REPO_SLUG

Then we “cd” into the repo directory. We use another Bitbucket-provided variable here, $BITBUCKET_REPO_SLUG. It’s like the short name of the repo. When you clone a repo from Bitbucket and then look at the directory name it was cloned into… that’s the value of $BITBUCKET_REPO_SLUG.
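
To make the two variables concrete, here’s what they would hold for a hypothetical repo (the workspace and repo names are made up):

# For a repo hosted at bitbucket.org/myworkspace/my-repo:
#   $BITBUCKET_REPO_FULL_NAME  ->  myworkspace/my-repo
#   $BITBUCKET_REPO_SLUG       ->  my-repo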

            # identity variables for git, i.e. "$GITBOT_USERNAME" and "$GITBOT_EMAIL"
            - git config user.name "$GITBOT_USERNAME"
            - git config user.email "$GITBOT_EMAIL"

The next three lines (again including a comment) configure git inside the pipeline’s VM with a made-up name and email. This is just because our next step will want to know them.

            - git checkout develop
            - git merge master
            - git push

The last three lines check out the branch we want to merge into, merge master into it, then push that branch back up to Bitbucket.
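
One thing worth knowing: if the merge hits a conflict, git merge exits nonzero, the step fails, and nothing gets pushed – you’d resolve that one by hand. And if you’d rather not rely on the cloned repo’s default tracking configuration, a slightly more explicit variant of those last three lines (same behavior, just spelled out) would look like this:

            - git checkout develop
            - git merge --no-edit master
            - git push origin develop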

We’re almost done!

You might have noticed I used a couple of variables, $GITBOT_USERNAME and $GITBOT_EMAIL. Those two I had to create myself. Again, ideally this would just be the username of your real “bot”, but even then you would want to store it in a variable – you’d just store it in an account variable instead of a repo one. In any case, to set up the variables in the repository you just go to the repository -> settings -> “scroll down” to pipelines -> “then choose” repository variables. Read the little blurb at the top of that page to see how variables work. Basically, don’t include the “$” in your variable name – you only need that to access your variable in scripts.
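
As a concrete (and entirely made-up) example of that naming rule:

# Defined as repository variables (no "$" in the stored name):
#   GITBOT_USERNAME = automerge-bot
#   GITBOT_EMAIL    = automerge-bot@example.com
# Referenced in the pipeline script with a "$" prefix:
- git config user.name "$GITBOT_USERNAME"
- git config user.email "$GITBOT_EMAIL"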

Finally, to make this work, the “bot” user (or yourself) will need permission to merge into the TARGET branch directly (i.e. without a pull request). So make sure you give whichever “bot”/user got the SSH key you generated earlier permission to merge directly. You do that in repository -> settings -> branch permissions.

And that’s it! Place this script into a file called bitbucket-pipelines.yml in the top level of your repo and you’re done!

Now when you merge into master, the pipeline should kick off and your automerge should happen as planned!

I hope you found this useful. If you have ideas on how to improve this please feel free to comment!

The seed that spawned this idea came from a similar solution to a similar problem presented here.

Jan 26

Docker, Hyper-V, and Android Emulator HOWTO

So, I wanted Docker and my Android emulator working at the same time on Windows 10… it wasn’t simple to figure out. Eventually I came across the solution of using a Hyper-V hosted instance of Android and connecting Android Studio to it for debugging. Works great! Here’s the write-up I found. It’s a little bit dated so the load instructions aren’t quite perfect, but they’re pretty darn close!

Using Android-X86 as an Emulator in Hyper-V for Windows

Worth mentioning: as of the time of this writing, the way to get the IP from within Android was ‘ifconfig’ instead of ‘netcfg’. And the location of the ‘adb’ tool was something like the following:
C:\Users\[user]\AppData\Local\Android\sdk\platform-tools
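
Putting those two pieces together, hooking adb up to the Hyper-V guest looks roughly like this (a sketch – the IP address is whatever ifconfig reported inside the VM, and 5555 is adb’s usual TCP port):

cd C:\Users\[user]\AppData\Local\Android\sdk\platform-tools
adb connect 192.168.1.50:5555
adb devices

Once adb devices lists the VM, Android Studio will offer it as a deployment target like any other device.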

Nov 17

Ebooks from Markdown

The last ebook I published was six years ago, and a lot about the way I think I would do it has changed. Primarily, I think I would change from writing it in Word to writing it in Markdown and then converting it to other formats as needed.

Why Write an Ebook in Markdown

Well, there are several reasons why I like the idea better.

  1. Plain text editor means write anywhere and still be able to format.
  2. Plain text editor means no dependence on proprietary software.
  3. Plain text means it’s easy to use git for version tracking. Big deal here.
  4. Markdown means splitting into multiple files easily – for example by chapter.
  5. Markdown means one master format that can be converted to all the others as needed.

Converting my Original Word Doc to Markdown

The original of my book was in doc format (written in Microsoft Word), but I was able to convert it seamlessly to Markdown with just the following steps:

  1. $ sudo apt-get install pandoc
  2. If it was actually doc, then open the file in Word or OpenOffice and “save as” docx.
  3. Copy the docx file to whatever directory you want your book files to be located.
  4. $ pandoc --extract-media . originalfile.docx -o output.md
  5. NOTE: The “.” after --extract-media and before originalfile is the base path you want the media extracted to. A “media” folder is automatically created at that location and the files are extracted into that media folder.
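
Once the book is in Markdown, going the other direction is just as easy. As a rough sketch (the chapter file names and title here are made up), pandoc can stitch split-by-chapter files straight into an ebook, concatenating the inputs in the order given:

$ pandoc --metadata title="My Book" -o mybook.epub ch01.md ch02.md ch03.md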

References

Some sources that seem useful. I will probably convert this to a references section as I fill the body of this article out over time, but for now you’ll have to get there yourself to get the info!

  1. http://www.gabrielgambetta.com/tgl_open_source.html
  2. https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet (Markdown cheat sheet)
  3. https://medium.com/@davidgrophland/making-an-ebook-from-markdown-to-kindle-cf224326b1a2
  4. https://pandoc.org/ (software for converting from Markdown to other formats)
  5. https://ebooks.stackexchange.com/questions/65/is-markdown-a-viable-source-format-for-writing-ebooks
  6. https://garyhall.org.uk/create-ebook-command-line.html

Jul 31

Fix Your Error Messages Before You Fix Your Errors

I work in software development. I’m a “Senior Software Project Engineer”, which means that I work with other people to define their needs (and wants) and then lead a team that designs, architects, and implements a solution. As I’ve moved up over the years I’ve worked with a bunch of different people, many of whom are experts in their area. Recently, I got a little bit of a verbal lashing from one of these people, the head of IT Operations. While the person almost never says anything nicely, they do almost always end up being right, so I try to ignore the delivery and instead try to get the message, because, like I said, they are actually almost always right and they’re really good. So what were they right about this time?

Our logging sucks. We log all kinds of stuff. Some of the errors we log are legit. But many, if not most, are total garbage. Some of the “errors” are not really errors (that’s another blog post). But what about those that are truly errors? What could be wrong with logging them?

Imagine an error that says “Fatal error during processing.”

Ok. Now what in the world does that mean? And what should I do about it? Can I just rerun processing? Do I have to do some kind of cleanup first? Should I report it to someone? Was the problem related to the software logic, to the environment (disk space, network down, etc.), or to the input? What the hell am I, the guy whose job it is to make sure work gets done, supposed to do with that error? I suppose I’ll probably just try to run it again and cross my fingers, right? Well, I’m only doing that until someone’s software doesn’t clean up after itself (that’s also another blog post) and causes havoc by being restarted… then I’m out of the business of trying to be helpful and I’m in the business of complaining.

Let’s re-imagine that same error now says “Fatal error during processing – insufficient disk space available”. That’s better, right?! Sure it is. I’d much rather have that! But I still haven’t answered over half of this operator’s questions. Can I just rerun? Do I have to clean up some runtime data first? It’s better, but not really completely helpful.

Trying again. “Fatal error during processing – insufficient disk space available. Process requires at least 1GB available disk space on volume /server/vol1 to run. Create necessary disk space and rerun.”

Now we’re cooking! All the information anybody needs to have is there in the log. Happy Ops people! And honestly, if you’re DevOps, you probably care about this even more because it’ll be you trying to figure out what went wrong. Logs are important, good logs are a godsend.
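
To make that concrete, here’s a minimal sketch in VB of the difference between logging the failure and logging the fix. Everything here – logger, GetAvailableBytes, the volume name – is hypothetical, not from any real codebase:

' Hypothetical pre-flight check: when it fails, log the limit, the location, and the remedy.
Const RequiredBytes As Long = 1073741824 ' 1GB
If GetAvailableBytes("/server/vol1") < RequiredBytes Then
   ' Bad:  logger.LogError("Fatal error during processing.")
   ' Good: tell the operator what was needed, where, and what to do next.
   logger.LogError("Fatal error during processing - insufficient disk space available. " &
                   "Process requires at least 1GB available disk space on volume /server/vol1 to run. " &
                   "Create necessary disk space and rerun.")
   Return
End If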

So next time you’re fixing an issue and you’re digging through code to try to find the cause of a problem – that you only know about because of a log – keep in mind that most of that digging could be avoided by better log messages. Take the time to update your log messages while you’re in there rather than just fixing the bug. You’ll be glad you did and you’ll make life better for you and for someone else!

Jul 26

VB enums default to 0 if they are Nothing

I was writing a test case today, and when doing the first step – “make a failing test” – I was having a problem… the test kept passing even though I knew it shouldn’t. Eventually I figured out the reason it was passing is that I didn’t understand that an Enum will default to 0 if it is actually Nothing.

I was testing a factory that had the very simple job of producing a logger depending on the type of logger that was requested. Here’s the factory code:

Public Shared Function MakeLogger(ByVal logmode As LoggingMode) As I_CustomErrorLogger
   If logmode = LoggingMode.DATABASE Then
      Return New CustomLoggerDBEnabled()
   ElseIf logmode = LoggingMode.DEVNULL Then
      Return New CustomLoggerDevNULL()
   ElseIf logmode = LoggingMode.STANDARD_IO Then
      Return New CustomLoggerStandardIO()
   Else
      Return New CustomLoggerStandardIO() ' Assume old way if not told
   End If
End Function

And, as you can see, it takes an object “LoggingMode” which is an enum.

My thought was, for my FAILING test I’ll pass Nothing (the VB equivalent of null) as the parameter, and I can then check to make sure that the type returned is something OTHER than StandardIO (because the default would be StandardIO). Remember, I’m trying to make a failing test, so I asserted that the returned logger was of type CustomLoggerDBEnabled.

But it passed! That’s not right, I said, and reran. Then recompiled. Then restarted Visual Studio, literally thinking something must have gotten cached and the test wasn’t rebuilding or something crazy like that. Then it hit me: maybe null behaves weird with enums. So I debugged it, and sure enough, the value I passed in as Nothing was being viewed as 0 in the debugger.

If that’s not bad enough, my enums were “auto enums” (I didn’t define a value for them) and so they started at 0, meaning that the first item in my enum was more or less equal to zero, the next equal to one, and so on. As it turns out, DATABASE was the first item in my LoggingMode enum and therefore it got the value 0. So in my factory, if it was sent Nothing for the enum, it would understand that as 0 and therefore translate it to DATABASE when it came to enum comparisons. All of that happening at the same time, plus the fact that I just happened to choose DATABASE as my FAILING condition, made the test pass.
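
You can see the coercion for yourself in a couple of lines (a standalone sketch using the original, auto-numbered version of the enum):

Dim mode As LoggingMode = Nothing ' Nothing coerces a value type to its default: 0
Console.WriteLine(CInt(mode)) ' prints 0
Console.WriteLine(mode = LoggingMode.DATABASE) ' prints True, because DATABASE auto-numbered to 0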

The fix was to update my enums to hardcode the values. So instead of:

Public Enum LoggingMode
   DATABASE
   DEVNULL
   STANDARD_IO
End Enum

I did the following:

Public Enum LoggingMode
   ' Today I learned that sending Nothing as a parameter when an Enum is expected defaults the value to '0'
   ' This means that the first item in the list of enums will be selected.
   ' So if we want to send 'Nothing' and have it not match anything in the enums list then we 
   ' have to specify the enum values. This is poop, but it is what it is.
   DATABASE = 1
   DEVNULL = 2
   STANDARD_IO = 3
End Enum

Now, since I don’t have a value mapped to 0, the Nothing enum will fall all the way through to my final Else statement and give back the default logger I actually want.
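
Another option (just a sketch, not something from my actual code) is to reject undefined values explicitly at the top of the factory, which catches Nothing (0) as well as any stray integer that was cast to LoggingMode:

If Not [Enum].IsDefined(GetType(LoggingMode), logmode) Then
   Return New CustomLoggerStandardIO() ' Assume old way if not told
End If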

To close this up, let me point out something not quite so obvious. Besides the in-your-face lesson that Enums that are Nothing are really 0, there’s something else valuable to take away: unit testing works to make better code. You shouldn’t skip it. It finds weird little cases more often than I’d like to admit. It just so happens that this one gave me an opportunity to learn a little unrealized nugget about the language rather than the implementation. But it has value! Do it!