Mar 21

How to Implement the Automerge Feature That Is Missing from Bitbucket Cloud

Recently I was involved in a changeover from Bitbucket Server to Bitbucket Cloud. It was pretty seamless, but there was one major issue – automerge, which is available on Bitbucket Server, is not available on Bitbucket Cloud. For many people, maybe that’s not a big deal, but for us it definitely felt like a step back. We had to do TWO pull requests anytime we did a release: one to master from the release branch and one from master back into the core development branch. In the event of a hotfix… yep, two pull requests there too. In a world where automation is used to make life simpler and to make sure the key steps get done, not having automerge just felt wrong to me and plenty of other people. So I decided to implement a solution and make it as generic as I could so you can use it for whatever projects you have. I chose to utilize Bitbucket Pipelines as the driver for this feature. If you haven’t used Pipelines you are encouraged to check out their intro here. Let’s check out the solution.

First I’ll describe a more “ideal” way, but I’ll also describe a modified way to implement the same pipeline. The modified way doesn’t require the “bot”… but it’s not as nice. However, it will allow a small group to get things going quickly.

Ideal way:

  1. Create a “bot” user – this is really just a dummy user that you will give lots of permissions to so it can do stuff on your behalf.
  2. In the repo, go to Pipelines -> Settings.
  3. Turn on Pipelines.
  4. Go to Pipelines -> SSH Keys and generate a set of keys.
  5. Copy the public key and paste it into the bot’s profile in the SSH keys section.
  6. Go to the repo you want to configure.
  7. Choose the development branch.
  8. Give the bot permissions to merge into that branch without a pull request.
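If you’d rather generate the keypair from a shell than through the Pipelines UI in step 4, it’s just a normal SSH keypair. Here’s a sketch – the filename and comment below are made up, use whatever you like:

```shell
# A dedicated keypair for the bot (no passphrase, since the pipeline
# runs unattended). Filename and comment are just examples.
ssh-keygen -t rsa -b 4096 -C "buildbot@example.com" -f bot_key -N ""

# The public half is what you paste into the bot's Bitbucket profile;
# the private half is what Pipelines holds on to.
cat bot_key.pub
```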

Slightly modified way:
If you decide not to go the bot route, you still need to do everything else. You just tie the SSH keys you generate to your own user instead. And as long as you have sufficient permissions it will work just fine.

Now that you have pipelines enabled, let’s define a simple pipeline that takes whatever is in “master” and merges it into “develop”. The easiest way to do it is probably just to copy this one.

# Merges master into develop anytime master is updated.
# It's a bit of a hack, needed because Bitbucket Cloud does not, at the time of this writing,
# support automatic merges in their cloud product.
# Built with lots of trial and error. No promises.
# Useful info:
# - More guides and examples are available in the Bitbucket Pipelines documentation.
# - Only use spaces to indent your .yml configuration.
# - I tried to use variables where possible to keep it generic.
# Author: Josh Ahlstrom
# Initial Creation Date: 2019-03-19

# NOTE: Pay no attention to the image utilized below. I chose dotnet, but it probably doesn't matter
#       which one you choose, since this is a script, which means we'll be working "from the shell" anyway.
# -----
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update -y
            - apt-get install git -y
            - mkdir repos
            - cd repos
            # full name variable used to make things generic
            - git clone git@bitbucket.org:$BITBUCKET_REPO_FULL_NAME
            - cd $BITBUCKET_REPO_SLUG
            # variables for git info such as "$GIT_USERNAME" "$GIT_EMAIL"
            - git config user.name "$GITBOT_USERNAME"
            - git config user.email "$GITBOT_EMAIL"
            - git checkout develop
            - git merge master
            - git push

Alright, so now let’s talk about what this does. Keep in mind that a pipeline can be thought of as running in its own little Linux VM. So you’ve got quite a bit of freedom in what you do.

A quick word of caution: pipelines appear to use “build” minutes. I think they should be called pipeline minutes because you don’t actually have to build anything in a pipeline, but I digress. If I read it correctly, most accounts come with 50 minutes for free. I ran it a BUNCH of times getting these scripts set up (maybe 20 times or so), which only totaled about 5 minutes. Anyway, if you have limited minutes that you’d prefer to use otherwise then this might not be the way to implement automerge for you – but maybe it’ll give you some other ideas.

Here goes. I’ll show some script and then describe what it does for the whole script.

image: microsoft/dotnet:sdk

First, we tell the pipeline that it’s going to run under an image that knows all about dotnet. There is a Docker one, a JavaScript one… lots of options. I don’t think it matters (as noted in the snippet) which one you use because we’re just doing basic shell commands. It appears that all the images can handle that much.


pipelines:
  branches:
    master:

You can apply different pipelines to different branches; in this case we just want to apply this pipeline to the master branch.

      - step:
          script:

The first (and only) step in our master branch pipeline is to run a script. This just runs it right on the command line BUT has access to certain variables that you can set up in bitbucket, either at the account level or the repo/pipeline level. Some are also global to bitbucket. More on variables later.

          - apt-get update -y
          - apt-get install git -y

The first two lines of the script just install git.

          - mkdir repos
          - cd repos

The next two lines create a directory for us to clone our repo into and move us into that directory.

          # full name variable used to make things generic
          - git clone git@bitbucket.org:$BITBUCKET_REPO_FULL_NAME

The next two lines (including one comment line) actually clone the repo. We use a Bitbucket-provided variable here to make things generic and simple to use on multiple repos. The variable, $BITBUCKET_REPO_FULL_NAME, returns the full name of the repo, including the workspace/account name.

          - cd $BITBUCKET_REPO_SLUG

Then we “cd” into the repo directory. We use another Bitbucket-provided variable here, $BITBUCKET_REPO_SLUG. It’s like the short name of the repo. When you clone a repo from Bitbucket and then look at the directory name it was cloned into… that’s the value of $BITBUCKET_REPO_SLUG.
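To make the two variables concrete, here’s a tiny sketch with made-up values – in a real pipeline Bitbucket fills these in for you. The slug is essentially the part of the full name after the workspace:

```shell
# Hypothetical values - Bitbucket provides the real ones at runtime.
BITBUCKET_REPO_FULL_NAME="myworkspace/my-cool-repo"
BITBUCKET_REPO_SLUG="my-cool-repo"

# The slug is the repo part of the full name, and it's also the directory
# name "git clone" creates - which is why "cd $BITBUCKET_REPO_SLUG" works.
echo "full name: $BITBUCKET_REPO_FULL_NAME"
echo "slug:      ${BITBUCKET_REPO_FULL_NAME#*/}"   # prints "slug:      my-cool-repo"
```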

          # variables for git info such as "$GIT_USERNAME" "$GIT_EMAIL"
          - git config user.name "$GITBOT_USERNAME"
          - git config user.email "$GITBOT_EMAIL"

The next three lines set up the pipeline’s VM git with some made up name and email. This is just because our next step will want to know them.

          - git checkout develop
          - git merge master
          - git push

The last three lines check out the branch we want to merge into, merge master into it, then push that branch back up to Bitbucket.
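If you want to see that checkout/merge sequence in action without touching Bitbucket, you can rehearse it in a throwaway local repo. Everything below is local and hypothetical (made-up branch contents and identities), and the final `git push` is left out since there’s no remote:

```shell
set -e
# Throwaway repo standing in for the pipeline's clone.
git init -q demo && cd demo
git config user.name "buildbot"          # playing the part of $GITBOT_USERNAME
git config user.email "bot@example.com"  # and $GITBOT_EMAIL

# Seed master, then branch develop off it.
git checkout -q -B master
echo "v1" > app.txt
git add app.txt && git commit -qm "initial"
git branch develop

# A release lands on master...
echo "v2" > app.txt
git commit -qam "release 1.0"

# ...and the automerge job is just the three lines from the pipeline
# (minus the push, since there is no remote here):
git checkout -q develop
git merge -q master -m "automerge master into develop"
cat app.txt   # prints "v2" - develop now has the release
```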

We’re almost done!

You might have noticed I used a couple of variables $GITBOT_USERNAME and $GITBOT_EMAIL. Those two I had to create myself. Again, ideally this would just be the username of your real “bot” but even if it is you would want to store it in a variable. You’d just store it in an account variable instead of a repo one. In any case, to set up the variables in the repository you just go to the repository -> settings -> “scroll down” to pipelines -> “then choose” repository variables. Read the little blurb at the top of that page to see how variables work. Basically, don’t include the “$” in your variable name – you only need that to access your variable in scripts.

Finally, to make this work, the “bot” user (or yourself) will need permission to merge into the TARGET branch directly (i.e. without a pull request). So make sure you give whichever “bot”/user got the SSH key you generated earlier permission to merge directly. You do that in repository -> settings -> branch permissions.

And that’s it! Place this script into a file called bitbucket-pipelines.yml in the top level of your repo and you’re done!

Now when you merge into master, the pipeline should kick off and your automerge should happen as planned!

I hope you found this useful. If you have ideas on how to improve this please feel free to comment!

The seed that spawned this idea came from a similar solution to a similar problem presented here.

Jan 26

Docker HyperV and Android Emulator HOWTO

So, I want Docker and my Android emulator working at the same time on Windows 10… it wasn’t simple to figure out. Eventually I came across the solution of using a Hyper-V hosted instance of Android and connecting Android Studio to it for debugging. Works great! Here’s the write-up I found. It’s a little bit dated so the load instructions aren’t quite perfect, but they’re pretty darn close!

Using Android-X86 as an Emulator in Hyper-V for Windows

Worth mentioning, as of the time of this writing the way to get the IP from within android was ‘ipconfig’ instead of ‘netcfg’. And the location of the ‘adb’ tool was something like the following:

Nov 17

Part of the Story of all of Our Lives

Our lives are complicated. But we all have something in common! We all make mistakes and sometimes fail to learn from them. This is that story, simply yet fully told. May we all eventually find another street.

Autobiography in Five Chapters

by Portia Nelson


I walk down the street.
There is a deep hole in the sidewalk
I fall in.
I am lost...
I am hopeless.
It isn't my fault.
It takes forever to find a way out.


I walk down the same street.
There is a deep hole in the sidewalk.
I pretend I don't see it.
I fall in again.
I can't believe I'm in the same place.
But it isn't my fault.
It still takes a long time to get out.


I walk down the same street.
There is a deep hole in the sidewalk.
I see it is there.
I still fall's a habit
My eyes are open; I know where I am;
It is my fault.
I get out immediately.


I walk down the same street.
There is a deep hole in the sidewalk.
I walk around it.


I walk down another street.
Nov 17

Ebooks from Markdown

The last ebook I published was six years ago, and a lot about the way I think I would do it has changed. Primarily, I think I would change from writing it in Word to writing it in Markdown and then converting it to other formats as needed.

Why Write an Ebook in Markdown

Well, there are several reasons why I like the idea of it better.

  1. Plain text editor means write anywhere and still be able to format.
  2. Plain text editor means no dependence on proprietary software.
  3. Plain text means it’s easy to use git for version tracking. Big deal here.
  4. Markdown means splitting into multiple files easily – for example, by chapter.
  5. Markdown means one master format that can be converted to all the others as needed.
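As a tiny illustration of points 4 and 5, per-chapter files play nicely with plain shell tools. The filenames and contents here are invented:

```shell
# Hypothetical per-chapter files.
printf '# Chapter 1\n\nIt begins.\n\n' > ch01.md
printf '# Chapter 2\n\nIt continues.\n' > ch02.md

# Stitch them, in order, into the single master manuscript; a tool like
# pandoc can then convert book.md to epub, docx, etc. as needed.
cat ch01.md ch02.md > book.md
grep '^# ' book.md   # lists both chapter headings, in order
```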

Converting my Original Word Doc to Markdown

The original of my book was in doc format (written in Microsoft Word). But I was able to convert it seamlessly to Markdown with just the following steps:

  1. $ sudo apt-get install pandoc
  2. If it was actually doc, then open the file in Word or OpenOffice and “save as” docx.
  3. Copy the docx file to whatever directory you want your book files to be located in.
  4. $ pandoc --extract-media . originalfile.docx -o originalfile.md
  5. NOTE: The “.” after --extract-media and before originalfile.docx is the base path you want the media extracted to. A “media” folder is automatically created at that location and the files are extracted into that media folder.


Some sources that seem useful. I will probably convert this to a references section as I fill the body of this article out over time, but for now you’ll have to get there yourself to get the info!

  1. (Markdown cheatsheet)
  2. (software for converting from Markdown to other formats)
Jul 31

Fix Your Error Messages Before You Fix Your Errors

I work in software development. I’m a “Senior Software Project Engineer”, which means that I work with other people to define their needs (and wants) and then lead a team that designs, architects, and implements a solution. As I’ve moved up over the years I’ve worked with a bunch of different people, many of whom are experts in their area. Recently, I got a little bit of a verbal lashing from one of these people, the head of IT Operations. While this person almost never says anything nicely, they almost always end up being right, so I try to ignore how they say things and instead focus on getting the message, because, like I said, they are actually almost always right and they’re really good. So what were they right about this time?

Our logging sucks. We log all kinds of stuff. Some of the errors we log are legit. But many, if not most, are total garbage. Some of the “errors” are not really errors (that’s another blog post). But what about those that are truly errors? What could be wrong with logging them?

Imagine an error that says “Fatal error during processing.”

Ok. Now what in the world does that mean? And what should I do about it? Can I just rerun processing? Do I have to do some kind of cleanup first? Should I report it to someone? Was the problem related to the software logic, to the environment (disk space, network down, etc.), to the input? What the hell am I, the guy whose job it is to make sure work gets done, supposed to do with that error? I suppose I’ll probably just try to run it again and cross my fingers, right? Well, I’m only doing that until someone’s software doesn’t clean up after itself (that’s also another blog post) and causes havoc by being restarted… then I’m out of the business of trying to be helpful and I’m in the business of complaining.

Let’s re-imagine that same error now says “Fatal error during processing – insufficient disk space available”. That’s better, right?! Sure it is. I’d much rather have that! But I still haven’t answered over half of this operator’s questions. Can I just rerun? Do I have to clean up some runtime data first? It’s better, but not really completely helpful.

Trying again: “Fatal error during processing – insufficient disk space available. Process requires at least 1GB of available disk space on volume /server/vol1 to run. Create the necessary disk space and rerun.”

Now we’re cooking! All the information anybody needs to have is there in the log. Happy Ops people! And honestly, if you’re DevOps, you probably care about this even more because it’ll be you trying to figure out what went wrong. Logs are important, good logs are a godsend.
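As a sketch of what emitting that kind of actionable message might look like in practice, here’s a hypothetical pre-flight check. The volume, the threshold (1MB here rather than the 1GB in the example above), and the exact wording are all made up:

```shell
# Hypothetical pre-flight disk check before running a job.
VOLUME="."         # stand-in for /server/vol1
REQUIRED_KB=1024   # 1MB expressed in KB

AVAILABLE_KB=$(df -Pk "$VOLUME" | awk 'NR==2 {print $4}')
if [ "$AVAILABLE_KB" -lt "$REQUIRED_KB" ]; then
    # Say what failed, what the requirement is, where, and what to do next.
    echo "Fatal error during processing - insufficient disk space available." \
         "Process requires at least 1MB available on volume $VOLUME to run." \
         "Create the necessary disk space and rerun." >&2
    exit 1
fi
echo "disk check passed: ${AVAILABLE_KB}KB available on $VOLUME"
```

The point isn’t the check itself – it’s that the message names the requirement, the location, and the remedy, so whoever reads the log doesn’t have to go digging.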

So next time you’re fixing an issue and you’re digging through code to try to find the cause of a problem – that you only know about because of a log – keep in mind that most of that digging could be avoided by better log messages. Take the time to update your log messages while you’re in there rather than just fixing the bug. You’ll be glad you did and you’ll make life better for you and for someone else!

Jul 26

VB enums default to 0 if they are Nothing

I was writing a test case today and when doing the first step – “make a failing test” – I was having a problem… the test kept passing even though I knew it shouldn’t. Eventually I figured out the reason it was passing is that I didn’t understand that an Enum will default to 0 if it is actually Nothing.

I was testing a factory that had the very simple job of producing a logger depending on the type of logger that was requested. Here’s the factory code:

Public Shared Function MakeLogger(ByVal logmode As LoggingMode) As I_CustomErrorLogger
   If logmode = LoggingMode.DATABASE Then
      Return New CustomLoggerDBEnabled()
   ElseIf logmode = LoggingMode.DEVNULL Then
      Return New CustomLoggerDevNULL()
   ElseIf logmode = LoggingMode.STANDARD_IO Then
      Return New CustomLoggerStandardIO()
   Else
      Return New CustomLoggerStandardIO() ' Assume old way if not told
   End If
End Function

And, as you can see, it takes an object “LoggingMode” which is an enum.

My thought was, for my FAILING test I’ll pass Nothing (this is the vb equivalent of null) as the parameter and I can then check to make sure that the type returned is something OTHER than StandardIO (because the default would be StandardIO). Remember I’m trying to make a failing test so I asserted that the returned logger was of type CustomLoggerDBEnabled.

But it passed! That’s not right, I said, and reran. Then recompiled. Then restarted Visual Studio, literally thinking something must have gotten cached and the test wasn’t rebuilding or something crazy like that. Then it hit me: maybe null behaves weird with enums. So I debugged it, and sure enough the value I passed in as Nothing was showing as 0 in the debugger.

If that’s not bad enough, my enums were “auto enums” (I didn’t define a value for them) and so they started at 0, meaning that the first item in my enum was more or less equal to zero, the next equal to one and so on. As it turns out DATABASE was the first item in my LoggingMode enum and therefore it got the value 0. So in my factory, if it was sent Nothing for the enum it would understand that as 0 and therefore translate it to DATABASE when it came to enum comparisons. All of that happening at the same time, and the fact that I just happened to choose DATABASE  as my FAILING condition made the test pass.

The fix was to update my enums to hardcode the values. So instead of:

Public Enum LoggingMode
   DATABASE
   DEVNULL
   STANDARD_IO
End Enum

I did the following:

Public Enum LoggingMode
   ' Today I learned that sending Nothing as a parameter when an Enum is expected defaults the value to '0'
   ' This means that the first item in the list of enums will be selected.
   ' So if we want to send 'Nothing' and have it not match anything in the enums list then we
   ' have to specify the enum values. This is poop, but it is what it is.
   DATABASE = 1
   DEVNULL = 2
   STANDARD_IO = 3
End Enum

Now, since I don’t have a value mapped to 0 the Nothing enum will fall all the way to my final else statement and give back the default logger I actually want.

To close this up, let me point out something not quite so obvious. Besides the in-your-face lesson that Enums which are Nothing are really 0, there’s something else valuable to take away: unit testing works to make better code. You shouldn’t skip it. It finds weird little cases more often than I’d like to admit. It just so happens that this one gave me an opportunity to learn a little unrealized nugget about the language rather than the implementation. But it has value! Do it!

Jul 26

Imitation Can Be a Costly Form of Flattery for the Imitated

You may have heard the phrase “Imitation is the sincerest [form] of flattery”. The quote comes from Charles Caleb Colton. And it’s true.  What’s also true is that being imitated online can be costly to those being imitated if they don’t account for it and adjust their way of doing things to minimize its effects.

I recently started a site called Nameinator that was created based on work I’ve done for other sites of my own. I own a bunch of sites about names and I’m working to roll them all into one under that site about name ideas – things from fantasy football team names to boat names.

Historically, what I’ve done is add names to a list, and after a certain number were added I’d write a rundown article of the newest items. But due to imitators (maybe better said, thieves), I’m planning to move toward publishing our name-idea articles a little more often instead of adding names straight to one of our lists and only later writing a rundown article that includes them. Doing it the old way, we managed to see our names in other people’s articles before we even got to publish an article containing the name ourselves. People were reviewing our lists and taking the names without linking back – as if the names were their own ideas. So we’ve got to do what we can to minimize the effect this has on us.

For the reader it’s no big deal. Basically we’ll be creating a new type of article that will be less lengthy but very fresh. We’ll still do longer rundown articles and we’ll still do articles highlighting user-added items. We hope this new way of doing things will allow us to rank on the search engines for our own content before other sites copy it. We want to get our own fresh content straight to you rather than someone else giving you their take on our ideas without attribution.

If you’re someone who creates a lot of content you may want to consider posting more often in smaller bits rather than saving up to publish some huge item. Especially if you’re putting the little bits out there in pieces for the convenience of the end reader. Take a little more time and do both so you’ll get ranking for all your hard work.

Feb 06

550.50 Error on Images in WordPress on Windows IIS

I was having a problem where uploaded images were causing a 550.50 error when users clicked on them to see the original image. That is, images showed up in articles fine, but when the user tried to view the full-size version it errored. I was able to fix the problem by setting the permissions of the temp directory that my php.ini file points to so that they matched what WordPress needed. Here are the two best sources I found on the subject. After the links I’ll post the contents of the page, including all the info you need.

So here’s an attempt at answering the question that did pretty well. It didn’t work for me exactly (I think they got IUSR and IUSRS backwards), but it’s pretty much right. Either way I found this other page that was referenced on that one even more useful.

Here’s that page quoted in its entirety, because sites can have pages dropped and I’m going to want this info later. Its original location was at the following URL. I’m not linking to it because it’s a dead link, but I will link to the primary page for that site.

Begin Quote:

While creating this blog I ran into a rather interesting problem that took me all of a half day to figure out. If you installed WordPress on a Windows IIS7 Server using the URL Rewrite Module 1.1, you may receive a HTTP 550 error when clicking on an inserted image.

Well that shouldn’t happen now should it.

After much agonizing I concluded the problem lies in the images’ permissions that seem to get set when first uploading them to/from WordPress. The permissions of the original uploaded image seem to NOT inherit the correct WP upload folder permissions; This in-turn, blocks access to the original file on the server. When the original file is called from the blog/internet, the server throws a 550.50 URL Rewrite Error. The same error that you see in the above image.

Now time for the strange part. Thumbnails of the original are created by WordPress using this same original uploaded image. They display just fine when inserted and called. These thumbnails inherit the correct permissions of the Windows/WP  folder where they are stored. So the question: Why does the original uploaded image that is stored in the exact same location as the thumbnails not inherit the same permissions? Checking the original image directly on the server against the thumbnails that were created from it confirmed my theory. Hum… perplexing. Is this really a URL Rewrite Error or not?

So what causes the original uploaded image from inheriting the correct permissions? The answer is incredibly simple but also incredibly annoying.

The Setup:

For the record, I’m running WordPress 2.8.4 on an IIS7 Windows server 2008 platform using FAST-CGI with PHP, MY-SQL & URL Rewrite module 1.1. I’m using a custom permalinks structure of “/%category%/%postname%/”. The following is the web.config code I’m using along with the same file for download:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="wordpress" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" appendQueryString="true" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Download: web.config file-1.0

The Solution:

PHP is the issue, not WordPress. The problem only happens when you use PHP to upload a file.  When you upload a file, PHP sends the file to a temporary directory on the hard drive (for me it’s C:\Windows\Temp) and then copies it over to it’s intended directory.  Once the file has landed in the temporary directory, it is assigned the permissions of that directory.  The problem is when Windows copies that file, it keeps the temporary directory’s permissions and doesn’t inherit that of your web directory’s.  Bingo!!

The easiest way to fix this problem is to add to the temporary directory your intended web directory’s permissions.  In other words, don’t erase the permissions already in the temporary directory, just add the web directory’s permissions to them.  In Windows Server 2008 the two user groups you must add are: “IUSR” & “IIS_IUSRS”.

If you want to change your temporary upload directory, find the “upload_tmp_dir” in your php.ini file and set it to the directory of your choosing (outside your web folders of course), and then add the proper permissions.

So, just create a new folder named “PHP_uploads” in  “c:\<YOUR_PHP_DIRECTORY>\PHP_uploads\”. Now go to your PHP.ini file and  change it to the new location.

After adding the new folder location to your PHP.ini file add the IUSR & IIS_IUSRS permissions to the new upload folder you just created.

After your all done, delete the previous uploaded images from your WordPress admin console and reinsert them as normal.

Your 550.50 error is now no more!

Feb 06

Programming Languages Popularity Surprising to Me

I came across the TIOBE Programming Community index a few minutes ago and was a little surprised by what I saw. As developers we all know that there’s much more work being done in different programming languages than what we are exposed to each day. But checking out the TIOBE indicator gives a pretty amazing look at what’s popular and what’s being used in the industry.

The TIOBE indicator is an indicator of the popularity of programming languages. The index is updated once a month. The ratings are based on the number of skilled engineers world-wide, courses and third party vendors. Popular search engines such as Google, Bing, Yahoo!, Wikipedia, Amazon, YouTube and Baidu are used to calculate the ratings. It is important to note that the TIOBE index is not about the best programming language or the language in which most lines of code have been written. –



Think about it for a moment. Think about which languages you think are the most popular and then click on the link. How did you do? Are the ones that you use on a regular basis the ones that you thought were more popular (they were for me), and were you right (not really, for me)? I think it demonstrates a certain bias that many programmers have: that what we use is probably the best available – unless we’re forced by someone else to use their own favorite choice. It’s important that we be aware of those biases and of what’s going on in the rest of the industry so we can grow as developers and remain up to date on our skill sets.