browserify, jquery, and jquery plugins

i’ve been playing around with browserify and i’m still debating whether or not it’s really the next big thing for me. complicating my development with a build process needs to buy me something that i really want. the whole concept of browserify is to be able to use commonjs modules in the browser by bundling all of your dependencies together. what does that really mean? it means that your server side and client side code can look very, very similar. in fact, you can even use the same node modules on the front end that you used on the back end.

this lets you write code that looks like:
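to make that concrete, here’s a minimal sketch (the module and its export are made up) of the commonjs style this enables:

```javascript
// greet.js -- a made-up module, written exactly like a node module
var greet = function (name) {
  return 'hello, ' + name;
};
module.exports = greet;

// elsewhere you'd write: var greet = require('./greet');
// and the exact same code runs on the server and, bundled, in the browser
console.log(greet('world')); // hello, world
```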

to work out all of the dependencies that your javascript has, a grunt task is introduced that scans through your javascript and bundles in all of the dependencies found in your require() calls.
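as a sketch, assuming the grunt-browserify plugin (the file paths here are examples, not from the original post):

```javascript
// Gruntfile.js -- a sketch assuming the grunt-browserify task;
// file paths are hypothetical
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-browserify');

  grunt.initConfig({
    browserify: {
      client: {
        src: ['client/app.js'],     // entry point; require()s get traced from here
        dest: 'public/js/bundle.js' // single bundled file for the browser
      }
    }
  });

  grunt.registerTask('default', ['browserify']);
};
```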

that spoke to me quite a bit because i liked the fact that my front and back end code now looked very similar. but then i ran into a hiccup: what about jquery? it turns out that jquery works reasonably well if you just require it, but what if you want to use a jquery plugin? that’s where i got stuck for a while.

i’m not completely sure that this is the right way to do it, but it works:
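the trick is along these lines (a sketch: it assumes jquery installed from npm, and the plugin filename is made up):

```javascript
// require jquery as a commonjs module, then leak it into global scope
var $ = require('jquery');
window.jQuery = $; // what plugins actually look for
window.$ = $;      // not strictly needed, but nice to have

// now the plugin can find window.jQuery and attach itself
require('./jquery.myplugin'); // hypothetical plugin file
```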

what’s different? binding $ and jQuery into global scope seems to be the key here. actually, all you really need is window.jQuery, but i like having $ in global scope so i throw that in. but the question is: why does this work?

if you look at any jquery plugin boilerplate, what it looks like is:
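the usual boilerplate is an immediately-invoked function that passes in the global jQuery object (the plugin name here is a placeholder):

```javascript
// typical jquery plugin boilerplate: an IIFE that expects
// to find the jQuery object in global scope
(function ($) {
  $.fn.myPlugin = function () { // 'myPlugin' is a placeholder name
    return this.each(function () {
      // plugin behavior goes here
    });
  };
})(jQuery); // <-- this reference is why jQuery must be global
```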

the plugin expects to find the jQuery object in global scope. usually, when jquery is included in the page via a script tag, this happens automatically, but because of the way that jquery is loaded via browserify, it doesn’t. so the above hack just inserts jquery into global scope so that the plugin can find what it needs to define itself.

it all seems to make sense, but it feels like there’s got to be a better way to do it. i just can’t figure out what that better way is. but i can’t get over the fact that it feels dirty to me that i need to write additional code just because i’m using browserify. it just feels like i’m doing something wrong here.

atom vs. sublime text

about a month ago there was a lot of hype about atom from the folks at github. the hubbub was great and there was a lot of excitement about what kind of offering github would present to us. once i got my invite, i eagerly installed it and gave it a test run.

user interface
they say that imitation is the sincerest form of flattery and when it comes to the look, feel, and even UX for atom, they took many pages out of sublime text’s book. in fact, i was shocked to see just how similar atom and sublime were. they look and behave so similarly that this is actually atom’s greatest downfall because comparisons will naturally be drawn.

atom is being billed as a hackable text editor. this slogan is, in fact, prominently displayed on their home page: “a hackable text editor for the 21st century”. at first blush, as a developer, this sounds awesome. i can hack my text editor to do whatever i want it to do? FAN-TAS-TIC. except, when i started to use atom, i realized something: i don’t want to hack my text editor. actually, i don’t want to even think about my text editor. i want to think about my code. yes, it’s great that you can make the text editor do what you want, but is that something that most developers are going to do? probably not.

but what does it really mean to have a hackable text editor? at its core, you expect a text editor to do certain things for you. open files, open multiple files, open multiple files in multiple panes, edit files, save changes, and the like. these are all obvious use cases. it’s in the packages where you tweak the editor’s behavior and extend certain functionalities that developers get that added productivity.

well, it turns out that atom and sublime have the same mechanism, just implemented in a different way. sure, atom’s plugin community is still growing (heck, the editor is still in semi-private beta), but i’m sure that there will eventually be feature parity between atom and sublime.

atom will go through the same growing pains that sublime text did back in the day. there will be multiple implementations of the same thing with each plugin developer vying to be the one used by the masses. just take a look at the number of jshint plugins that are available for atom now. about three weeks ago, no linter for javascript existed and now they are springing up left and right. finally, someone has written an uber linting framework where you can plug in different language linters, but this sort of stuff is just going to happen until atom matures.

the fundamental difference
so what is the fundamental difference? github’s approach with atom is to bet on web technologies, building an IDE out of HTML, JS, and CSS. this means using node.js as the language for plugin development and embedding a browser in an app to render the IDE. in theory, this sounds like a really promising and exciting venture. leverage the power of node.js’ npm registry!

to me, this is where there is real promise. the idea that you can write these plugins with node code speaks to me as a front end and node.js developer because it’s something that i’m familiar with and love. that’s cool. but being cool isn’t enough. do i think that i will build plugins for atom? it seems unlikely since other people have already built the things i need, but even if they hadn’t, would i really want to pick an editor just because it lets me build the functionality that it’s missing?

to me, that’s where i start to not care. i don’t care if my text editor uses python, node, or BASIC. so long as it has all of the features i want, i’m good.

on top of that, performance for atom is noticeably slower. sure, it’s currently in beta, but i wonder how realistic it is to expect a DOM-based editor to compete with a native app on performance. i’m not even complaining about opening big files; i’m talking about scrolling around. there’s just a hint of a delay there that i’m not used to. maybe this can be optimized and it won’t be an issue.

at the end of the day, if the performance issues can be figured out, this comes down to a choice for developers. will plugin developers prefer a node-based architecture over a python one? will a node-based solution provide more capabilities than a python-based one?

honestly, i don’t know. atom has a long way to go to reach feature parity with sublime, which is what i feel its initial goal is. if that is all they are aspiring to do, then there’s not much innovation there (aside from the underlying technology used to get to feature parity). what i really want from my IDE is for it to improve my productivity, to help me make fewer bugs, and to not get in my way. right now, atom isn’t making any of those promises, but no one else is either.

javascript auto-complete in sublime text editor

sublime text editor is currently, by far, my favorite text editor/IDE. its power, though, comes from its extensibility. packages are where sublime text’s power is fully unleashed. there are tons of them out there and many, many blog posts about the best ones.

recently, i discovered one that enables easy javascript auto-completion for sublime: tern for sublime. the install is not quite as straightforward as other plugins, but it does a pretty good job at providing auto-completion for javascript, powered by the tern analysis engine. if you are a front end developer, i’d recommend checking it out.

switching to linux (sort of)

i’m trying to find a faster way to cross-compile raspberry pi binaries and it’s come to this: i’m installing linux on a spare laptop to see if i can get chroot cross-compiling to work.

it’s been a while since i’ve installed linux as a primary OS, but i use it all the time for servers. when your expectation of linux is a command-line interface, you have a pretty low bar to hit in terms of user experience. but when it is the main OS that is running your apps, there’s a far higher bar of expectation, and that bar is nowhere close to being met on a default install of linux.

to be fair, i am using ubuntu 12.04 LTS, so there’s a good chance that things may have improved in ubuntu 13.x, but i’m not quite ready to make the plunge into that release.

my first impression of the UI was actually quite pleasant. the window manager is relatively slick, there’s a Mac Spotlight/Windows Start-style search experience to quickly launch apps from the keyboard, and the default background is clean and attractive.

but then things took a turn for the worse. wifi didn’t work out of the box, and it took additional driver installs and setup to get it working. even then, i still have to manually connect to the access point. i’ll have to figure out how to get that to connect at boot time instead.

the most striking and painful realization of how far linux needs to go, though, came from opening up a browser. the internet looks broken. it looks broken because the microsoft web-safe fonts, which i thought were standard, are not installed on ubuntu by default. instead, fonts fell back to system defaults and everything about the internet felt foreign.

installing infinality and the microsoft web-safe fonts (ttf-mscorefonts-installer) made looking at the screen bearable again.

to mimic the os x workflow that i’m accustomed to, i installed gnome do as a quicksilver replacement and cairo dock as a dock replacement. just the addition of these two apps has made my transition to linux more seamless.

there are weird quirks though. google chrome requires a plugin to make the backspace key perform a browser back.

luckily, there’s a native dropbox client. there’s also an evernote clone called everpad, which works but is still pretty rough around the edges; i think i might prefer the web page over it.

sublime text editor seems to work as well, though i’m not sure how much i’ll be coding on this machine.

that being said, i think i can get this linux install working the way i want, enough so that i would prefer this linux distro over windows. i still prefer mac os x over linux, but the gap between the two is certainly closing for routine tasks like web browsing.

jenkins job dependencies

out of the box, jenkins provides a mechanism where you can kick off one job once the current job has completed. you can chain several upstream and downstream jobs so that a build pipeline can be created. this is ideal if you have a workflow where you have a build job and then want to run a test job immediately after the build. this functionality is great, but a little limiting.

for example, let’s say that you have a workflow like this:

if you want to deploy your code, you want to:

  • pass all of your unit tests and static code analysis (job 1)
  • build the project (job 2)
  • run your test automation (job 3)
  • and then finally deploy your code (job 4)

in jenkins, you can set these up as a chain of upstream and downstream jobs so that once you run job 1, it’ll trigger jobs 2, 3, and 4. but what if you want a different workflow? what if you want to run job 1 on every commit but not do anything else? what if you want some of your jobs to depend on others, but not necessarily always be triggered from some job all the time?

the jenkins parameterized trigger plugin comes to the rescue. with this plugin, you can tell jenkins that the current job has a prerequisite that another job needs to be successful, and you can block the current job’s execution until its dependencies are met.
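as a rough sketch of what this looks like in a freestyle job’s configuration (the option labels are from memory and the job names are hypothetical, so treat this as an approximation):

```text
job: build code (job 2)
  build steps:
    1. Trigger/call builds on other projects
         projects to build: static-code-analysis   (job 1)
         block until the triggered projects finish their builds: yes
         fail this build step if the triggered build is worse than: FAILURE
    2. (the actual build steps for the project)
```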


now when we trigger the build code job, we’ll make sure that the static code analysis passes, then build the project, and then run test automation. if any of the jobs fail, the pipeline won’t continue.

additionally, we can now have job 1 trigger on every commit and not have to worry about wasting resources by kicking off builds just because someone committed some code to the repository.

node.js development on windows

i’m not much of a windows guy anymore, but i was asked to make an install script for developers at work who wanted to write node.js code on windows. the problem with node.js development on windows is not so much node.js itself, but how the platform works around its limitations. with its event loop-based architecture, node.js doesn’t excel at computationally intensive code, so the solution is to compile native libraries and use those to get the performance needed.

this solution is quite viable, but it also requires you to be able to compile these extensions. this is where there are a few problems. trying to get a C compiler on a windows box is sort of non-trivial, but what’s worse is that some of the libraries out there aren’t ported to windows, and then you are dead in the water.

so if i absolutely had to develop on windows, i’d set up a VM. and if i had to use a VM, i’d use vagrant for headless virtualization management. and if i had to use vagrant, i’d use virtualbox for virtualization because of its built-in integration with vagrant.
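a minimal Vagrantfile sketch for that kind of setup (the box name and provisioning steps are assumptions, not from the original post):

```ruby
# Vagrantfile -- box name and provisioning steps are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"   # some linux OS

  # install node.js and the build tools npm needs for native modules
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential python nodejs npm
  SHELL
end
```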

but now the problem is that there are a bunch of dependencies that need to be installed before you can even start developing on windows: vagrant, virtualbox, some linux OS, node.js inside that linux OS, and all of the build tools needed for npm installs in the linux VM. now let’s say that you have a team of windows developers and you want to get them set up quickly. how do you do that? well, that was what i was asked to solve, so i decided to make a windows installer that installs all of the dependencies.

it turns out that there is a pretty established solution available: nullsoft scriptable install system. it comes bundled with plenty of examples, and though i feel that the scripting language available with NSIS is a little awkward, the documentation and resources available for it make it fairly easy to use.

the trickiest part of the NSIS script was figuring out how to launch an .msi file instead of a standard .exe file. it turns out that you have to invoke it via msiexec.exe.
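for example (the .msi path here is hypothetical; /i tells msiexec to install, /qn makes it quiet):

```nsis
; launch an .msi by handing it to msiexec.exe
; ("$INSTDIR\node-setup.msi" is a hypothetical path)
ExecWait '"msiexec.exe" /i "$INSTDIR\node-setup.msi" /qn'
```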

so great, you can make a windows installer that will install all of your dependencies. spin up a vagrant box, install all of the dependencies in your VM, then package it up and make an installer that will install everything for you. easy! now you have an automated install of your development environment.

making the keyboard repeat rate faster on mac os x

i’ve felt that the keyboard repeat rate on mac os x was a little slower than i preferred, even at the fastest setting, and wondered whether or not that was something that could be made faster. it turns out that you can override this setting on the command line and go just a little bit faster than what system preferences allows.

run this command in Terminal and you can speed up the repeat rate. i set it to 1, but you can go even faster and set it to 0. make sure you log out and log back in to make the settings go live.
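the setting lives in NSGlobalDomain; a sketch of the command (if i remember right, the fastest the preferences slider allows maps to a value of 2):

```shell
# override the key repeat interval; lower is faster
# (the system preferences slider bottoms out at 2)
defaults write NSGlobalDomain KeyRepeat -int 1
```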

making the leap to sublime text 3

i’ve been holding back from taking the plunge into sublime text 3 because plugin support has been slow to port over to 3’s new architecture, but, tonight, after a fit of frustration with some code, i decided i needed a distraction.

i took an inventory of the plugins i use, and it turns out that they have all been ported over to be compatible with sublime text 3. there’s a great resource, can i switch to sublime text 3, that will tell you what shape the plugins you use are in.

the biggest change i see in my day to day life with sublime text is how sublimelinter works. they’ve completely revamped the way that plugin works. each linter is now a separate plugin that you need to install instead of being packaged together with the main sublimelinter plugin. what this enables, which couldn’t happen before, is running multiple linters for javascript at one time!

i love this because i run both jshint and gjslint on my code and have had to resort to hacks to be able to do this, but now i can do it all.

what to do when the npm registry is down

it seems almost unthinkable, but the npm registry is having issues.

that can happen?? so what do you do when the main npm registry goes down? it turns out there’s an EU mirror.
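in practice that just means pointing npm’s --registry flag at the mirror for the install (the mirror URL here is from memory, so double-check that it’s still maintained; the package name is just an example):

```shell
# install from the EU mirror instead of the main registry
npm install express --registry http://registry.npmjs.eu/
```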

worked for me in a pinch. i’ll have to remember that.

using livereload without the browser plugin

livereload is a little project that dramatically changes a front end developer’s life. it watches the filesystem for changes and tells the browser to reload the page when it detects them.

the idea by itself is pretty awesome. the actual implementation is a little weird, as there’s a standalone app and a browser plugin that communicate with each other to detect the file changes and then trigger a browser reload.

but wait, there’s something even cooler! the livereload node module is an implementation of the livereload server in node.js. you can hook this into your node application, and then you won’t need a separate app to monitor for file changes in your codebase. but still, there is some friction here, because you still need that browser plugin so that the livereload server can trigger a browser reload.

but if you look at what the plugin does, all it does is inject some javascript on the page that calls the livereload server, loads up a javascript file, and initiates the websocket connection to listen for triggers. it turns out that you can simply add that script call to your page yourself, and then you won’t need the browser plugin at all.

it is technically documented, but buried pretty deep in the knowledge base.

livereload runs as a separate server inside your app, so you just need to stick the code somewhere after your main server has been initiated. i put it at the bottom of my server.js file.

livereload accepts a parameter, exts, which tells it which file extensions to watch. you can also pass as many paths as you want for directories to watch.
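a sketch of what that looks like at the bottom of server.js, using the livereload module’s createServer()/watch() API (the extensions and watched paths here are examples):

```javascript
// at the bottom of server.js -- sketch using the livereload node module
var livereload = require('livereload');

var lrServer = livereload.createServer({
  exts: ['html', 'css', 'js'] // file extensions to watch
});

// directories to watch; pass as many paths as you want
lrServer.watch([
  __dirname + '/public',
  __dirname + '/views'
]);
```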

then you just need to add this snippet to the bottom of your page template to have the browser connect to the livereload server in your app:
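the snippet is just a script tag pointing at the livereload server (35729 is livereload’s default port; adjust the host if you aren’t developing against localhost):

```html
<!-- connects the page to the livereload server; 35729 is the default port -->
<script src="http://localhost:35729/livereload.js"></script>
```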

and voila, if you make an update to a file in your project, your browser will automatically reload for you.