visual studio code tasks

visual studio code has a nifty but not well-documented feature: tasks.

vscode supports the notion of a task runner where you can run tasks in the IDE. as node-friendly as vscode is, there isn’t very good documentation on how to set up tasks for a node-based project.

grunt and gulp are popular task runner choices here, but i think that npm should be the task runner of choice for node-based projects. vscode already parses a node project’s package.json to derive the debugger start command, and npm has a facility to run commands like start and test.

a sample vscode task runner configuration for a node project would look like:
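something along these lines — a minimal sketch of a .vscode/tasks.json using the early 0.1.0 task schema, which delegates everything to npm scripts (the script names here are just the usual npm conventions; adjust to whatever your package.json defines):

```json
{
    "version": "0.1.0",

    // run everything through npm
    "command": "npm",
    "isShellCommand": true,
    "args": ["run"],

    // each task name gets appended to the args,
    // so these become `npm run build` and `npm run test`
    "tasks": [
        {
            "taskName": "build",
            "isBuildCommand": true
        },
        {
            "taskName": "test",
            "isTestCommand": true
        }
    ]
}
```

with this in place, the IDE’s build and test commands map straight onto your npm scripts, so there’s no separate task runner to maintain.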

webpack – karma and istanbul code coverage

part of the difficulty of using something like webpack is that the source code that you edit is not the final code that is used in your project. so when it comes to unit testing your code, you have to figure out how to instrument what webpack is bundling.

suppose that you are using karma as your test runner with mocha, chai, and sinon. you can add istanbul to the mix to add code coverage, but you need istanbul to understand how webpack bundles files.

luckily, there’s istanbul-instrumenter-loader. you can add it as a loader so that instrumenting your code happens as part of the webpack bundling process.

a sample karma.conf.js file:
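a minimal sketch of what that could look like, assuming karma-webpack provides the webpack preprocessor and using the webpack 1 preLoaders syntax — the src/test paths, entry point, and browser are placeholders for your own project layout:

```javascript
// karma.conf.js — a sketch; paths and browser are placeholders
module.exports = function (config) {
    config.set({
        frameworks: ['mocha', 'chai', 'sinon'],

        // a single entry point that require()s all of the spec files
        files: ['test/index.js'],

        // run the entry point through webpack before the tests execute
        preprocessors: {
            'test/index.js': ['webpack']
        },

        webpack: {
            module: {
                // instrument the source (not the tests) with
                // istanbul-instrumenter-loader as a preLoader
                preLoaders: [
                    {
                        test: /\.js$/,
                        include: /src/,
                        loader: 'istanbul-instrumenter'
                    }
                ]
            }
        },

        reporters: ['progress', 'coverage'],
        coverageReporter: {
            type: 'html',
            dir: 'coverage/'
        },

        browsers: ['PhantomJS']
    });
};
```

the key idea is that the instrumentation runs as a preLoader, so istanbul sees your original modules before webpack mangles them into a bundle, and the coverage report maps back to the files you actually edit.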

the big server migration – day 4

doing just a little bit of this migration at a time is starting to wear on me. i’ve decided to hunker down and move everything else over. it is turning out that i just happened to pick the problematic sites to migrate over first because the rest of the migrations are starting to move over quite nicely.

the gallery is the last bit that i was nervous about since it was all custom code written about a decade ago, but it seems like PHP hasn’t changed much since then so i’m a-ok. not sure if that’s a good or bad statement about PHP, really. i guess it’s more the problem statement of PHP.

i believe the migration is complete. DNS just needs to propagate and we’re pretty much golden.

i was just thinking that i need to figure out how to do domain-based routing with haproxy so entire domains can be reverse proxied over to node. now i need a VM to play with haproxy configurations since i’ve productionalized my only instance. *sigh*

although, with as cheap as linode is, it may be worth it to just spin up a new linode as my dev server to play with. hmmmmm.

the big server migration – day 3

ugh, i’m sick and restless so after being in bed all day, i decided to get up and apply myself a little with what else? yet another blog migration to the new server.

i think i’m going to try to migrate 1 blog a day and see how that treats me. it seems EVERY blog has a different issue. this time around, there was some issue with one of the themes that i had been using on the old host that made wordpress silently freak out and fail. http 200. blank screen. no PHP errors. no databases issues. it just didn’t want to work. lovely. i am so over wordpress…if only my users were too.

that being said, i was able to migrate another blog over and am just waiting for DNS to propagate before i call it a complete success.

in the meantime, i’ve been toying around with the idea of a new tech stack to do my development work in. something node-based, but i think it’s time for me to finally go through my list of technologies that i always wanted to check out, but never really got to:

  • webpack on a node.js stack
  • has koa’s time come yet?
  • is sails.js the way to go?
  • lazo vs. rendr vs. ezel
  • bootstrap 3 vs. foundation 5 – when will this ever end?
  • gulp
  • can we realistically embrace web components yet?
  • is react + flux or some variant really that good?
  • speaking of flux, the sheer number of variants that have spawned into existence seems to indicate that the gist of flux is right but there are some gaps that other people are trying to fill.

the big server migration – day 2

life is moving along a little better. the blogs migration doesn’t look as bad as initially anticipated. i think i’ve successfully migrated the first blog over. i’ve changed the DNS to hit the new server, so we’ll give it a day or so and make sure that nothing wonky is going on. if all looks good, i’ll get to moving the rest of the blogs over this week and see how quickly i can deprecate the old server.

linode has been pretty impressive so far, i’m quite happy with them. is hosted there and that was enough for me to give them a gamble.

i’m a little anxious about the reverse proxy with haproxy working with named virtual hosts. it *should* all work at this point, but you just never know. that being said, i’m already thinking about what my first node-hosted project should be. maybe i’ll port the gallery over to node; it’s sad that all of my personal work is so outdated and using technologies from years past.

i keep forgetting how conservative ubuntu’s package defaults are. apache2 doesn’t have mod_rewrite enabled and it took me a while to remember how ubuntu enables all of these configurations. it’s very modularized and file-based, so you just have to add a bunch of soft links, but it makes it hard for me because i’m so used to 1 big config file. i get the value prop of the way ubuntu does it, and i can imagine all of the tooling that you can build around this, so i guess this shows how long i’ve been out of the game.

the big server migration – day 1

i’m embarking upon a big server migration, moving from a “co-lo” to a cloud-based VPS hosted solution. the cloud provider i’m going with is linode. my first attempt at launching an instance of a VM failed and i had to contact their customer support.

their online customer support migrated me from one box to another and i was up and running quickly thereafter. the CS rep responded to my online support ticket within 30 minutes of opening it, so that seemed pretty good.

so far, i’m pretty impressed with linode. all of the pain of being a sysadmin, however, is coming back to me, and what’s worse is that i’ve decided that i really want to complicate the architecture of my little box…because i can.

this box is largely responsible for hosting several wordpress blogs, but it’s also home to some personal projects. most of the legacy stuff is all LAMP-based, so once i get wordpress working, everything else should hopefully fall into place.

the one wrinkle in this is that i really want a good playground for some node.js projects i have in the background. i’ve decided to reverse proxy all requests coming into this box with haproxy. nginx is so much simpler to set up for this kind of straightforward proxying, but i really love the admin stats that haproxy provides.

haproxy is working great now; it always takes me a while to fiddle with it before i get it just right:
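the general shape of it is something like this — a trimmed-down sketch, where the hostnames and ports are placeholders: route on the Host header so the named virtual hosts keep working, send node traffic to its own backend, and keep the stats page around:

```
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# route by the Host header so named virtual hosts keep working
frontend http-in
    bind *:80
    acl is_node hdr(host) -i node.example.com
    use_backend node_app if is_node
    default_backend apache

backend apache
    server apache1 127.0.0.1:8080 check

backend node_app
    server node1 127.0.0.1:3000 check

# the admin stats page that makes haproxy worth the extra setup
listen stats
    bind *:8404
    stats enable
    stats uri /stats
```

the acl/use_backend pair is also how the domain-based routing would work later on: one acl per domain, each pointing at its own backend.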

we’ll call it a day there.

atom vs. sublime text

about a month ago there was a lot of hype about atom from the folks at github. the hubbub was great and there was a lot of excitement about what kind of offering github would present to us. once i got my invite, i eagerly installed it and gave it a test run.

user interface
they say that imitation is the sincerest form of flattery, and when it comes to the look, feel, and even UX of atom, they took many pages out of sublime text’s book. in fact, i was shocked to see just how similar atom and sublime were. they look and behave so similarly that this is actually atom’s greatest downfall, because comparisons will naturally be drawn.

atom is being billed as a hackable text editor. this slogan is, in fact, prominently displayed on their home page: “a hackable text editor for the 21st century”. at first blush, as a developer, this sounds awesome. i can hack my text editor to do whatever i want it to do? FAN-TAS-TIC. except, when i started to use atom, i realized something: i don’t want to hack my text editor. actually, i don’t want to even think about my text editor. i want to think about my code. yes, it’s great that you can make the text editor do what you want, but is that something that most developers are going to do? probably not.

but what does it really mean to have a hackable text editor? at its core, you expect a text editor to do certain things for you: open files, open multiple files, open multiple files in multiple panes, edit files, save changes, and the like. these are all obvious use cases. it’s in the packages, where you tweak the editor’s behavior and extend certain functionalities, that developers get that added productivity.

well, it turns out that atom and sublime have the same mechanism, just implemented in a different way. sure, atom’s plugin community is still growing (heck, the editor itself is still in semi-private beta), but i’m sure that there will eventually be feature parity between atom and sublime.

atom will go through the same growing pains that sublime text did back in the day. there will be multiple implementations of the same thing, with each plugin developer vying to be the one used by the masses. just take a look at the number of jshint plugins that are available for atom now. about three weeks ago, no linter for javascript existed, and now they are springing up left and right. finally, someone has written an uber linting framework where you can plug in different language linters, but this sort of stuff is just going to happen until atom matures.

the fundamental difference
so what is the fundamental difference? github’s approach with atom is to bet on web technologies (HTML, JS, and CSS) to build an IDE. this means using node.js as the language for plugin development and embedding a browser in an app to render the IDE. in theory, this sounds like a really promising and exciting venture. leverage the power of node.js’ npm registry!

to me, this is where there is real promise. the idea that you can write these plugins with node code speaks to me as a front end and node.js developer because it’s something that i’m familiar with and love. that’s cool. but being cool isn’t enough. do i think that i will build plugins for atom? it seems unlikely since other people have already built the things i need, but even if they hadn’t, would i really want to pick an editor just because it lets me build the functionality that’s missing from it?

to me, that’s where i start to not care. i don’t care if my text editor uses python, node, or BASIC. so long as it has all of the features i want, i’m good.

on top of that, performance for atom is noticeably slower. sure, it’s currently in beta, but i wonder how realistic it is to expect performance from a DOM-based editor to compare against a native app. i’m not even complaining about opening big files; i’m talking about scrolling around: there’s just a hint of a delay there that i’m not used to. maybe this can be optimized and it won’t be an issue.

at the end of the day, if they can figure out all of the performance issues, to me, this comes down to a choice for developers. will plugin developers prefer a node-based architecture over a python one? will a node-based solution provide more capabilities than a python-based one?

honestly, i don’t know. atom has a long way to go to reach feature parity with sublime, which is what i feel its initial goal is. if that is all they are aspiring to do, then there’s not much innovation there (aside from the underlying technology used to get to feature parity). what i really want from my IDE is for it to improve my productivity, to help me make fewer bugs, and to not get in my way. right now, atom isn’t making any of those promises, but no one else is either.

javascript auto-complete in sublime text editor

sublime text is currently, by far, my favorite text editor/IDE. its power, though, comes from its extensibility. packages are where sublime text’s power is fully unleashed. there are tons of them out there and many, many blog posts about the best ones.

recently, i’ve discovered one that enables easy javascript auto-completion for sublime: tern for sublime. the install is not quite as straightforward as other plugins’, but it does a pretty good job of providing auto-completion for javascript. it’s built on tern, so if you are a front end developer, i’d recommend checking it out.

jenkins job dependencies

out of the box, jenkins provides a mechanism where you can kick off one job once the current job has completed. you can chain several upstream and downstream jobs so that a build pipeline can be created. this is ideal if you have a workflow where you have a build job and then want to run a test job immediately after the build. this functionality is great, but a little limiting.

for example, let’s say that you have a workflow like this: before you deploy your code, you want to ensure that it:

  • has passed all of its unit tests and static code analysis (job 1)
  • has been built (job 2)
  • has passed your test automation (job 3)
  • and then, finally, gets deployed (job 4)

in jenkins, you can set all of these up as a chain of upstream jobs so that once you run job 1, it’ll trigger jobs 2, 3, and 4. but what if you want a different workflow? what if you want to run job 1 on every commit but not do anything else? what if you want some of your jobs to depend on others, but not necessarily always be triggered from some job all the time?

the jenkins parameterized trigger plugin comes to the rescue. with this plugin, you can tell jenkins that the current job has a prerequisite that another job needs to be successful, and you can block the current job’s execution until its dependencies are met.
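as a rough sketch of how the build job (job 2) might be configured in the freestyle job UI — the job names here are placeholders — the plugin adds a build step that triggers and blocks on the prerequisite job:

```
# freestyle job: build-project (job 2) — a sketch; job names are placeholders
build steps:
  1. Trigger/call builds on other projects       # step added by the parameterized trigger plugin
       projects to build: static-code-analysis   # job 1, the prerequisite
       block until the triggered projects finish their builds: yes
       fail this build step if the triggered build fails: yes
  2. (the actual build step, e.g. invoke the project’s build script)
post-build actions:
  build other projects: run-test-automation      # job 3, triggered only on success
```

because the first build step blocks, the build job only proceeds once static code analysis has gone green, and it fails outright if the prerequisite fails.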


now when we trigger the build code job, we’ll make sure that the static code analysis passes, then build the project, and then run test automation. if any of the jobs fail, the pipeline won’t continue.

additionally, we can now have job 1 trigger on every commit and not have to worry about wasting resources and kicking off builds just because someone committed some code to the repository.