Mistakes recruiters make

If you’re in the job market, especially in a technology field, you’ve no doubt come across recruiters. You know how it is: you upload your resume to CareerBuilder, Monster, or what have you, and an hour later your phone won’t stop ringing and your inbox is quickly filling up.

In theory, I think the concept of recruiters is great: I can go about whatever I’m doing while someone else finds opportunities that match my skill set. In practice, though, I’ve come to dread them, because roughly 95% of the recruiters who contact me waste my time by making some very simple mistakes.

Providing no information

These are the best kind of calls I receive: “Hi, this is Joe Recruiter from ABC Staffing, came across your resume online and have an amazing opportunity I think you would be a great fit for. Call me back as soon as you get this!”

This is a recipe for me to delete the voicemail and ignore any further communication. I know absolutely nothing about what you’re trying to get me into, except that you think I’d be a great fit. I’m not going to waste my time calling you back if you think that a “junior” position is a great fit for someone who already has “senior” in their job title.

Not providing enough information

These are the best kind of calls I receive: “Hi, this is Joe Recruiter from ABC Staffing, came across your resume online and have an amazing Java opportunity I think you would be a great fit for. Call me back as soon as you get this!”

Deja vu? Yeah. This is very similar to the above, but with a bit more information. I now know that this is at least a Java position (which I don’t even want; see the next point), but I know nothing else about it.

Since this has come up twice, let me be perfectly clear. This is what I consider required before I would respond to a recruiter:

  • Company Name – I want to know who I would be working for so I can research them
  • Position Title – I want to know what my new title would be. Unless the pay was awesome, I doubt I’d move from a Senior level position to a Junior level position
  • Position Requirements – I want to know what I need to know for the job so I can tell if I would be a good match
  • Position Responsibilities – I want to know what I’m going to be responsible for
  • Estimated Pay – I want to know what I would be making. Even a ballpark figure is good (like, $75k / year, depending on experience). What I definitely do not want is to go through the entire recruiting process and get an offer that’s $20k less than what I’m currently making. That’s a waste of everyone’s time
  • Benefits Summary – I want to know what benefits you offer and what you cover

Otherwise, I’ll maintain radio silence. I’m not desperate for a job, and frankly, you’re the one who’s being paid to place someone.

Not actually reading resume / profile

A recruiter should care about finding you a good job that matches your skill set and salary requirements. We all know this doesn’t really happen; most recruiters just want to place you somewhere so they can get paid. That’s all well and good, we all have to put food on the table, but I wish recruiters would stop and take a minute to actually read my details from wherever they found them.

Take this for example: most of the profiles I have online state that I specialize in just a few technologies / languages, but know enough about others to make do. I usually list all of this for completeness, especially since every job I’ve worked has had legacy systems, edge cases, artificial constraints, etc., that forced us to use technologies / languages outside the norm. These are all things I wouldn’t want to work with on a day-to-day basis, but I can if need be. This may be nitpicky, but I would think that with a resume that states “extensive experience in A, B and C” and “basic experience in X, Y, and Z,” I would receive more leads related to A, B, and C. Instead, I’m assuming someone just says, “I’m looking for someone who knows Y,” has my name pop up, and fires away.

Even better, I receive a few calls and e-mails a week about great opportunities in California with relocation paid. That sounds like a good deal; it costs a lot to move across the country. The only problem is that the profile you said you got my information from explicitly states that I am not willing to relocate.

If you can’t take a few minutes to read over my information, I won’t deal with you.

Not doing their job research

I’ve received many e-mails, especially in the past week, talking about amazing opportunities around me paying “market rates” of $50k a year in my field. In my area, $50k is a decent salary for the average worker in any given field, where about $35k could get you by, so making $15k more than the average worker sounds like a great deal. Oh, except for the fact that actual market rates for my field are about $75k, roughly $25k higher than what they’re claiming. This is pretty offensive to me; it seems like they’re trying to pinch pennies instead of spending a bit extra on a decent worker.

Oh, and on top of that, if they had read my information where they found it (see the last point), they would have seen that I currently make about $30k more than what they’re offering once company-paid benefits and perks are factored in. That’s doubly offensive.

What to do?

Personally, I think the entire recruiting system is broken, and many agree with me. I would encourage everyone who can to avoid recruiters: do your own job search and skip recruiter postings. I realize not everyone can do this (e.g., someone who was just laid off and needs a job now), but the more people who stop using recruiters until the industry fixes itself, the faster that will happen.

5 Reasons Your Mobile Website Sucks

With mobile browsers becoming more popular every day, a mobile-friendly version of your website is required for it to be easily accessible to all users.

You have no mobile site

Okay, this one is a little cheap, as your mobile site can’t suck if you don’t have one, but let’s be honest: this is probably one of the biggest mistakes you can make. Recent data puts mobile users at roughly 30% of internet traffic. That’s almost 1/3 of users who may come to your non-mobile website on a mobile device, find it unfriendly to use for various reasons, and go to a competitor’s site instead. That’s a huge chunk of users to ignore when the solution is fairly simple.

You have a mobile site, but you have content on your desktop site that is not accessible on your mobile site

This one is pretty self-explanatory. Have you ever loaded a news webpage and been greeted with a message like “We’re sorry, this content is not available for mobile viewers”? That’s something that should never happen, and it’s a guaranteed way to have users bounce away from your site. Thankfully, it’s becoming less frequent. If nothing else, you should display the non-mobile site in its stead; showing something, anything, is better than saying “we don’t want you to view our site.”

Accessing a mobile link redirects you to your mobile homepage

Ever followed a link from Google to a specific article on a website, only to be presented with the mobile home page of that site? When that happens to me, I immediately go back to the search results page and try another link. The way I look at it, I already searched for the content once on a search engine; I shouldn’t have to search again to find it on your site. If you automatically redirect mobile users to the mobile site, you should send them to the mobile equivalent of the content they were trying to access in the first place.
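As a rough sketch of what that redirect should look like (assuming a hypothetical m.example.com mirror of the desktop site’s URLs), carry the requested path along instead of dropping the user on the home page:

[code lang="javascript"]
// Send mobile visitors to the equivalent page on the mobile site,
// not just the mobile home page (m.example.com is a made-up mirror)
if (/Mobi/.test(navigator.userAgent) && location.hostname !== 'm.example.com') {
    location.replace('http://m.example.com' + location.pathname + location.search);
}
[/code]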

You have “download our app” pages or pop-ups

This is a slightly gray area. It’s entirely possible that you have a great mobile app that lets users navigate your content much more effectively, and that users simply don’t know about. In my opinion, it’s okay to notify the user once per visit that the app exists. However, there are still sites out there that pop up the same notification on every page. That should not happen.

You have huge resources (extra sin, auto-loading / playing videos)

This is a huge no-no. With mobile websites, you need to consider a few things that you don’t with desktop users, namely that mobile devices don’t perform as well as desktops, and that a device on a 3G/4G connection may have strict data caps to keep in mind. Auto-playing videos and other media-heavy items can lead to unresponsive mobile devices and huge data usage. In general, heavy content on mobile sites should be opt-in, meaning the user actively indicates that they want it to load, for instance by tapping on a video to play it. In addition, these heavy resources should be optimized for mobile devices: hi-def images and videos should be re-sampled to a lower resolution, and you should try to use CSS and HTML5 in place of images and video where possible.
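For example, here’s a minimal click-to-play sketch (the element ID and video path are made up) that defers loading a video until the user asks for it:

[code lang="javascript"]
// Swap a lightweight placeholder for the real <video> only when tapped
document.getElementById('video-placeholder').addEventListener('click', function () {
    var video = document.createElement('video');
    video.src = '/media/clip-mobile.mp4'; // hypothetical lower-resolution encode
    video.controls = true;
    this.parentNode.replaceChild(video, this);
    video.play();
});
[/code]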

Soylent Adventures

By now you’ve probably heard of Soylent, a pre-prepared powder that is mixed with a small amount of oil and some water to create a “meal” that is supposed to be similar to other meal replacement drinks (think Slim-Fast, etc), but nutritionally complete. I’ve been very interested in this from the get-go, as most of my go-to foods for breakfast and lunch are easily prepped, but are probably lacking in the nutrition department. In addition, I feel that a lot of these don’t keep me satiated as well as they could, which sometimes leads to me eating more than I’d like.

Currently, the lowest price you can pay for Soylent is $255 a month if you buy a full month’s (28 days’) worth on a recurring basis. That works out to about $9.11 a day, or about $3.04 a meal if you split each day’s serving into 3. This isn’t a bad deal; I currently pay about $200-$250 a month for food for myself (based on paying about $400-$500 a month for myself and my fiancée). However, even with Soylent, I’d still like to keep regular food around the house, as I’m afraid I would get bored with just one type of food, so this amount may be too much for people like me. In addition, I’m sure most of you are like me and would like to try Soylent before investing several hundred dollars into it. If you want just a small sample to test the taste, consistency, and satiety, you would pay $85 for a week’s worth, which comes out to $12.14 a day, or $4.05 a meal, roughly 33% more expensive than the recurring monthly package.

Thankfully though, the creators behind Soylent are very open about what goes into their product (possibly due to some of the criticisms of their product with regard to how nutrition works), and are very supportive of DIY Soylent. Using the tools they provide, you can create a recipe and combine different ingredients to make a nutritionally complete version of Soylent tailored to your specific needs, such as being vegan, low-carb, high-protein, low-cost, etc.

The Recipe

I thought it would be fun to try to create a version of Soylent for myself using this tool. I decided that I wanted my recipe to try to meet a few criteria:

  • Have a low entry cost. Ideally, I would keep this well below the $85 cost of a week’s worth of official Soylent
  • Use some ingredients I already have at the house to help lower the entry cost for myself, namely Body Fortress Super Advanced Whey Protein
  • Be mostly nutritionally complete. I’m only looking at this for breakfast and lunch; I’d still like the option to eat real food for dinner or elsewhere, so it not being 100% complete is acceptable
  • Be decently palatable. I’m used to some questionable food choices, and I can stomach about anything if I’m hungry enough, but I’d like for the taste and texture to be decent

With these in mind, I started looking at the recipes that already existed. Most of them consist of roughly four basic ingredients: flour, some form of protein powder, oil, and a vitamin. These form the bulk of your macro-nutrients (carbs from the flour, protein from the protein powder, fats from the oil, and basic vitamins/minerals from the vitamin). From there, the recipes deviate, adding additional vitamin/mineral supplements or other ingredients for taste. The basic 4 ingredients are all relatively inexpensive, save sometimes for the flour, as most recipes prefer specialized flours like oat or flaxseed over regular wheat flour.

Using this knowledge, I looked at some of the more popular recipes, namely the Hacker School recipe. I created a mock-up of the recipe to see how the nutrition played out, and it seemed to fall very short of my goals. I’m not sure if it was because I switched out some ingredients with mostly-equivalent items in an attempt to get the price down, or if the recipe is just not good to start with, but I wasn’t able to get a decent price while maintaining nutritional value. Looking through some of the other recipes on the site, I saw that it wasn’t uncommon to undershoot or overshoot most values.

Around that time, I read another article on Bachelor Chow and People Chow. These have similar ingredients, but both way overshot the vitamins/minerals. I set about trying to recreate the recipe so that it slightly undershoots the nutrition (remember, I’d like to eat regular food for dinner as well), or at least avoids some of the vast 600%+ overshoots. I replaced the whey protein with a soy protein I found in another recipe plus the protein powder I already had. I replaced the choline bitartrate with a cheaper pill alternative, and the soybean oil with a local generic. I removed the potassium citrate completely after reading the safety section of the Hacker School blog recipe, figuring that I would rather undershoot and have a banana if needed than overshoot. I removed the magnesium because it was overshooting by 200%, and instead put in a generic multivitamin (Centrum is listed; I used the generic store-brand instead). For the masa, I bought the same type, but at a local store for 1/4 of the price, and I already had a big container of salt.

Cost

Product                                            Qty   Unit Price   Total Price
Masa Harina                                         2      $2.88        $5.76
Iodised Salt                                        1      $0.67        $0.67
Choline Bitartrate                                  1      $9.12        $9.12
Body Fortress Protein Powder                        1     $15.98       $15.98
Jarrows Soy Protein                                 1     $13.29       $13.29
Generic Men's Multivitamin (100ct)                  1      $6.99        $6.99
Soybean Oil                                         1      $2.98        $2.98
Subtotal                                                               $54.79
Items owned (Salt, Protein Powder, Soybean Oil)                       -$19.63
New Subtotal                                                           $35.16
Tax                                                                     $2.81
Total                                                                  $37.97

Here’s the recipe on DIY Soylent. The full cost of what I actually paid was slightly less than what I estimated in the recipe, so it should be pretty accurate. Based on the data there, this recipe costs $1.96 a day, or about $0.66 a meal. Not bad.

Preparation

To make the mix, I combined a day’s worth of the masa, salt, and both proteins in a large measuring cup. I ground up the multivitamin and choline bitartrate pills with a mortar and pestle and ran them through a sifter to remove the non-dissolvable coating. Finally, I took 1/3 of the mix, put it in a blender bottle, added 1/3 of the oil (about 17g), filled the rest of the bottle with water, and shook it up.

 

DIY Soylent

The Result

The result is about the only thing you could expect from inexpensive Soylent: lackluster. Imagine dog food for people, ground up and put into water, with a tortilla aftertaste. I’ve had way worse than this before, so I’m not complaining about the taste much. And since the recipe is slightly calorie-deficient, you could add in some brown sugar or simple syrup to help improve the taste.

The biggest obstacle with the Soylent is the texture. See, masa doesn’t really mix well with water, so straight out of the blender bottle it has a super gritty texture. It’s not unpalatable, but you could practically chew it if you felt like it. I think this could be reduced somewhat by drinking it through a straw to reduce how much you take in per sip. Blending the Soylent also seemed to reduce the grittiness, but not by a whole lot. I read a lot about people letting the Soylent soak in the fridge overnight after mixing. I tried this, and it was vastly improved the next day, so I would recommend that, though the grittiness is still noticeable.

Improvements

I think the biggest improvement that could be made is to cut the masa with a different type of flour, maybe a 50-50 mix, and see how that works. I’ve seen a lot of recipes use rice or oat flour, both of which are relatively inexpensive. You can also make both of these cheaper by buying whole rice/oats, soaking them, and then blending them finely. I even saw one person mention that he essentially makes oatmeal with whole oats, blends it, and uses that in the Soylent, which makes it smoother. It may also be worth splitting the full-day mixture into fourths instead of thirds, which would allow a higher ratio of water to mix and could improve the texture. Since I want Soylent for 2 meals a day, this would also let me make a new batch every other day instead of every day and a half.

The taste could probably also use some work, again maybe by adding a bit of sweetness. I think brown sugar or simple syrup would work relatively well, or stevia if you wanted a low/no-calorie option. Doubly so if the flour is cut with another type, as that will reduce the tortilla flavor. I’ll have to experiment with this.

Finally, I think slowly adding in potassium and vitamin K supplements would make it mostly nutritionally complete. I’m not in a huge hurry to do this, as I still plan on eating regular food, but I think it would be useful to many.

Final Thoughts

All in all, I’m impressed that something so cheap tastes decent and is edible. I’d like to improve the recipe some while still keeping the cost down, so any feedback is welcome. Feel free to comment or make variations on the recipe. I will also start posting periodic updates on how staying on this for a while goes, how well it satiates, etc.

Safari Timeout issue

I ran into an issue using the Safari browser that took a while to track down. Whenever our server came under heavy load, requests could take much longer to process, upwards of 10-15 seconds for pages with a lot of data. While this was a whole other issue on its own (which, sadly, we couldn’t really address, as we don’t manage our servers), it was made worse by the fact that many of our users use Safari on our webapps, and these pages would time out after 10 seconds or so where other browsers would load fine.

The affected AJAX call was similar to the one below. Note the 30-second timeout.

[code lang="javascript"]
$.ajax({
    url: 'http://www.mywebsite.com',
    async: false,
    success: function() {
        console.log('Success!');
    },
    timeout: 30000
});
[/code]

The key here is the async flag. After a bit of research, I found out that some time ago, Safari changed the way synchronous requests are handled: it ignores the timeout setting and considers the request timed out if it takes more than 10 seconds. I’m assuming this was done for the sake of user experience, as synchronous requests block the page from responding until they complete; by forcing the timeout to a small value, Safari ensures the page doesn’t hang for long periods at a time. While I don’t think a browser should dictate the timeout length, and, frankly, it’s still a bad experience when a page that loads normally in other browsers won’t load at all, it’s something that needs to be dealt with.

There are basically two ways to deal with this issue. The first is to modify the request handler server-side to periodically send some blank data to let Safari know the request is still active; this is moderately difficult to set up and can run into issues with response parsing. The easier method is to just use asynchronous calls. There’s only a small number of cases where synchronous calls should be used, and if you can’t think of a reason for using one, you should stick with asynchronous calls anyway.
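For reference, here’s roughly what the asynchronous version of the call above would look like (a minimal sketch; the URL and handlers are placeholders):

[code lang="javascript"]
$.ajax({
    url: 'http://www.mywebsite.com',
    // async defaults to true; Safari honors the timeout on asynchronous requests
    async: true,
    success: function() {
        console.log('Success!');
    },
    error: function(jqXHR, textStatus) {
        // textStatus will be 'timeout' if the 30-second limit is hit
        console.log('Request failed: ' + textStatus);
    },
    timeout: 30000
});
[/code]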

Internet Explorer CSS limit

Here’s an interesting one for the day that will hopefully give people another reason not to use Internet Explorer, or, if they do, to always use the most up-to-date version.

Our University requires that we support legacy versions of web browsers, as there are still many users running Internet Explorer 6 on Windows XP. *Sigh* To do so, we rely on forcing users to install the Chrome Frame plugin if their browser is woefully out of date, meaning Internet Explorer 6, 7, or 8. We “fully” support version 9 (while trying to emphasize why users should at least update to 10 if they are going to continue using Internet Explorer) and fully support version 10, as it is at least acceptably standards-compliant. Since the older browsers are running Chrome under the hood and the newer browsers are workable, we don’t usually have many compatibility issues between developing the website for Chrome and users viewing it in any version of Internet Explorer.

We ran into a fun error a month or so ago where users on Internet Explorer 9 would be presented with a page that looked nothing like it did in other browsers, looking instead like an MS-Paint imitation of what we’d actually designed. From the looks of it, random CSS styles just weren’t being applied to the page. The styles that did apply stayed the same between page loads, meaning the browser wasn’t just loading random styles each time; it was loading them in order and then failing. We traced this through the file and found that styles stopped applying about 3/4 of the way through the style sheet. In addition, we could move styles to the top of the sheet to force them to apply, which would then knock out a different rule. Clearly, there was a hard limit at work here.

We did a bit of research and found an answer. Basically, what’s happening is simple, and incredibly stupid. Internet Explorer versions 9 and below (thankfully 10 isn’t included) limit the number of CSS files and selectors that can be used on a page. Any page can have a maximum of 31 CSS files (no problem there), and each file can only have a maximum of 4095 selectors. In addition, any file has roughly a 288kB size limit.

While these limits may seem large, and they may have been when Internet Explorer 9 was first released, they can quickly be reached without realizing it. The core of our theme, as is the case with many websites today, is Bootstrap, and Bootstrap is an excellent example of how quickly these limits can be reached: it is currently 97.2kB minified, with over 1400 selectors. That’s about 1/3 of the allowed selectors and about 1/3 of the allowed file size in one file. On top of that, we use various other libraries with CSS, such as Font Awesome, and that doesn’t even count the tons of extra CSS we’ve written so our website doesn’t look like a standard Bootstrap website and to provide styles for all of our components and whatnot.

If we had included all of the above as separate CSS files, it might have been fine, but one more attribute of our pages created this perfect storm. Following web development best practices, we always try to reduce the number of HTTP requests needed to load a page, and one of the major ways we do this is by combining our CSS files into one large file at build time, which is used on every page. Basically, Bootstrap, the other 3rd-party libraries, and all of our custom CSS were compiled into one file to be loaded once and then used throughout the entire website.

Now thankfully, as we force Chrome Frame for Internet Explorer versions 6, 7, and 8, those weren’t affected, and version 10 doesn’t suffer from this error, leaving only version 9. Unfortunately, we weren’t permitted to force Chrome Frame for version 9, as it was considered “up-to-date” by our office of Information Technology, so our website had to support it out of the box without any additional software. So how did we solve it? Simple: our build now cuts the generated style sheet in half and includes the halves as stylesheet1.css and stylesheet2.css.
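If you’re curious what that looks like, here’s a minimal sketch of the idea as a Node script (the file names are made up, and it naively assumes a flat stylesheet with no nested @media blocks):

[code lang="javascript"]
// split-css.js: naive splitter to stay under IE9's 4095-selector limit
var fs = require('fs');

var LIMIT = 4095;
var css = fs.readFileSync('stylesheet.css', 'utf8');

// Break the sheet into rules at closing braces (naive, but fine for a sketch)
var rules = css.split('}');

var files = [];
var current = '';
var count = 0;

rules.forEach(function (rule) {
    if (rule.trim() === '') { return; }
    // The selector list is everything before the '{'; count its comma-separated parts
    var selectors = rule.split('{')[0].split(',').length;
    if (count + selectors > LIMIT) {
        files.push(current);
        current = '';
        count = 0;
    }
    current += rule + '}';
    count += selectors;
});
if (current !== '') { files.push(current); }

// Write out stylesheet1.css, stylesheet2.css, ...
files.forEach(function (chunk, i) {
    fs.writeFileSync('stylesheet' + (i + 1) + '.css', chunk);
});
[/code]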

At some point, we may need to cut it into 3 separate files. I’m hoping at that time that Internet Explorer version 9 will be long gone.

OpenGL Camera position woes (with a bonus FPS camera class)

When I’m not dabbling in web development (and to be fair, since I work full-time as a web developer, I don’t usually do that in my free time), I like to play around with various technologies to expand my repertoire and hopefully learn new concepts that I can use in other fields as well. My latest kick has been getting back into OpenGL. I had attempted to learn OpenGL when I was much younger, with a pipe dream of creating the next big game by myself, but never really got past rendering primitives, as I had convinced myself that I would never be able to make the 3D models, sounds, and music required for a full-fledged game. I had also done this with DirectX and XNA, and while I got farther toward a full game with each technology, I always ended up stopping.

This time around, I’m using OpenGL through LWJGL so I can cut out a lot of the boring boilerplate that turned me off the first time. To further reduce boilerplate, I used a generic FPS camera from Lloyd Goodall. In addition, I’m sticking with a voxel-based project so that I don’t have to worry about model meshes or animation (for now, at least). Everything was going well until I started working on chunk management, which is essentially loading segments of the terrain around the player and removing ones that are too far away, so that you can show a “seamless” world while only processing what’s around the player. While I had no issue with the chunk management itself, there was a strange issue with the actual rendering: I would move the camera along the positive X-axis, but the chunks along the negative X-axis would start loading.

After poring over the chunk manager for days and determining that nothing was wrong there, I started to look at the only part of the project I hadn’t written: the camera. Looking at the code, it uses the following to set up the camera position:

[code lang="java" title="Camera.java"]
public void lookThrough()
{
    // Rotate the pitch around the X axis
    GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);

    // Rotate the yaw around the Y axis
    GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);

    // Translate to the position vector's location
    GL11.glTranslatef(position.x, position.y, position.z);
}
[/code]

Recall your basic OpenGL graphics pipeline, which can be summarized in three big steps: model transform, view (camera) transform, and projection transform. The problem I was having lay in the second step, the view transform.

I don’t claim to be an expert in OpenGL by any means, but I will try to explain what was going on as far as I understand it. Recall that the view transform takes the assembled “world” (where each entity was independently rendered relative to the origin (0, 0, 0)) and adjusts it so that the “camera” sits at the origin, with everything else in the world shifted via translation, rotation, scaling, etc., to reflect that. The camera class I used assumed that the translation and rotation were applied to the camera itself, which would make sense if OpenGL supported actual cameras. However, it does not; everything is rendered as if a “camera” were at the origin. As such, the view transform actually transforms the world itself, not the camera. In other words, the view transform is the inverse of the camera’s transform, which is why we have to translate the world by the negative of the camera’s position (and rotate by the negative of its angles) to correctly place the camera in the transformed world. I’ve drawn this crappy MS Paint diagram to hopefully show the difference between what was expected and what actually occurred:

Camera vs World translation

In the end, your camera class will look more like the one below. I’ve provided the whole class, as the movement code changes to adjust for the new rendering setup (though the rotation calculations may not have needed to change). Also note again that I’m no OpenGL expert, so this may not be perfect, but it appears to work correctly:

[code lang="java" title="FixedCamera.java"]
import org.lwjgl.opengl.GL11;
import org.lwjgl.util.vector.Vector3f;

public class Camera {

    // Camera constants
    private final float CAMERA_SPEED = 20.0f;

    // Camera position
    private Vector3f position = null;

    // Camera view properties
    private float pitch = 0, yaw = 0, roll = 0;

    // Mouse sensitivity
    private float mouseSensitivity = 0.25f;

    // Constructor
    public Camera(float x, float y, float z) {
        this.position = new Vector3f(x, y, z);
    }

    // Used to change the yaw of the camera
    public void yaw(float amount) {
        this.yaw += (amount * this.mouseSensitivity);
    }

    // Used to change the pitch of the camera
    public void pitch(float amount) {
        this.pitch += (amount * this.mouseSensitivity);
    }

    // Used to change the roll of the camera
    public void roll(float amount) {
        this.roll += amount;
    }

    // Moves the camera forward relative to its current rotation
    public void walkForward(float distance) {
        float d = (CAMERA_SPEED * distance);
        position.x -= d * (float)Math.sin(Math.toRadians(yaw));
        position.z -= d * (float)Math.cos(Math.toRadians(yaw));
    }

    // Moves the camera backward relative to its current rotation
    public void walkBackwards(float distance) {
        float d = (CAMERA_SPEED * distance);
        position.x += d * (float)Math.sin(Math.toRadians(yaw));
        position.z += d * (float)Math.cos(Math.toRadians(yaw));
    }

    // Strafes the camera left relative to its current rotation
    public void strafeLeft(float distance) {
        float d = (CAMERA_SPEED * distance);
        position.x += d * (float)Math.sin(Math.toRadians(yaw - 90));
        position.z += d * (float)Math.cos(Math.toRadians(yaw - 90));
    }

    // Strafes the camera right relative to its current rotation
    public void strafeRight(float distance) {
        float d = (CAMERA_SPEED * distance);
        position.x += d * (float)Math.sin(Math.toRadians(yaw + 90));
        position.z += d * (float)Math.cos(Math.toRadians(yaw + 90));
    }

    // Moves the camera straight up
    public void goUp(float distance) {
        position.y += (CAMERA_SPEED * distance);
    }

    // Moves the camera straight down
    public void goDown(float distance) {
        position.y -= (CAMERA_SPEED * distance);
    }

    // Translates and rotates the matrix so that it looks through the camera;
    // note the negated values: we transform the world, not the camera
    public void lookThrough() {
        GL11.glRotatef(-pitch, 1.0f, 0.0f, 0.0f);
        GL11.glRotatef(-yaw, 0.0f, 1.0f, 0.0f);
        GL11.glTranslatef(-position.x, -position.y, -position.z);
    }
}
[/code]

If you’re interested in writing a voxel engine or game, the Let’s Make a Voxel Engine website is probably the best place to learn. Be forewarned, though: a TON of detail is excluded from the tutorial, so you’ll have to piece together a lot of it from various sources. I’m planning on writing a similar tutorial tailored for LWJGL, but haven’t gotten to it yet.

Apache vs Nginx on a low-resource server

Recently, in the interest of learning more about system and server administration, and wanting a faster, non-shared server on which to run my various websites, I’ve invested in a small VPS (virtual private server) at Digital Ocean. For those of you who don’t know, a VPS is essentially a sub-section of a dedicated server on which you have root-level access and can manage your operating system, installed programs, and pretty much anything else. While the performance may not be as good as an equivalently-spec’d dedicated server, they are generally much cheaper, and you can create and destroy multiple virtual servers as needed to scale with demand.

While setting up a WordPress site, I ran into a huge issue that I had never run into on a shared hosting plan: I could rapidly refresh the page to crash the server. Basically, if the page was refreshed too quickly, the server would start to run out of memory and crash the MySQL process, which would then be unable to restart itself. Until the service was manually restarted, the server would still serve HTTP requests, but wouldn’t respond with anything besides a brief message saying that the MySQL server was down. While it’s not unusual for a server to be brought down by a large number of requests, the fact that this one could be brought down by one person holding down the refresh key for a few seconds was outrageous.

I tried everything I could think of: configuring MySQL and Apache for a low-resource setup, creating a large swap space with the leftover SSD space I wasn’t using, and limiting requests to prevent unintentional overloading. While some of these steps helped the server stay up through a larger number of concurrent requests, I could still single-handedly crash it. I started investigating MySQL’s error logs to find what crashed the service, which turned out to be an out-of-memory error: the server’s memory would run out, causing the MySQL service to crash and become unrecoverable. Using the top command over SSH, I analyzed the running processes and found the following:

Top level processes of server being DDoS'd with Apache

As you can see, quite a bit of the system’s resources are going to Apache alone, and that’s not to mention the resources that also go to PHP, MySQL, and all the other system-level processes required to run Ubuntu. Clearly, this was a problem: if Apache hogged the system’s resources during a flood of requests, no amount of tuning the other processes would help.

After looking into the problem, I found that many people’s servers run faster with less resource usage on Nginx instead of Apache (including the creators of WordPress, ironically enough). This sounded especially good to me, as I am running on a low-resource server (1-core CPU, 512 MB RAM, 20 GB SSD). As I had no experience with Nginx, I reset my VPS to a clean installation, then followed a tutorial series on installing Nginx with WordPress, which was surprisingly simple and only took about 20 minutes. After getting everything set up, I attempted to crash my server again while watching a top view. Here’s what happened:

Top level processes of server being DDoS'd with Nginx

As you can see, the resource usage is much lower. Where Apache was using hundreds of megabytes of RAM across a large number of instances, Nginx ran only two processes, each using about 10 megabytes. While response times got longer under load, I didn’t have any timeouts occur. In addition, I could run a load test using Blitz.io (1-100 users over 60 seconds) without dropping a single request, where Apache would begin dropping requests about 5 seconds in.

Overall, this is phenomenal performance compared to Apache. I’m sure that with better hardware and configuration you could get comparable performance out of Apache, and some advanced features may still require it, but I now think Nginx should be the primary HTTP server on any setup.

Don’t use jQuery? Why not?

Every once in a while, you see a blog post or website complaining about people using jQuery. There are a lot of reasons people give for this, but from what I’ve seen, it really boils down to just a few:

  1. Pure Javascript is faster – In almost all aspects of life, better speed creates better applications.
  2. Page bloat – jQuery adds anywhere from ~30 Kb to ~180 Kb, depending on whether you use the production or development version
  3. jQuery encourages bad code – Since it’s so easy to learn, only amateurs use it and they write bad code
  4. Real Javascript coders don’t use jQuery – I don’t even know what to say about this
  5. Licensing issues – Some people are afraid of using third-party libraries, but jQuery is licensed under the MIT License, which is completely open source
  6. Well… that’s about it

The fact of the matter is, these are all bad excuses. In fact, people who say any of these just don’t understand what jQuery is actually for. Many people think that jQuery exists to replace Javascript, but in reality, it is meant to complement it. jQuery’s main advantages are a low learning curve, smaller and more re-usable code, and cross-platform compatibility. That last one is the most important.

Think about it this way: people use Java for several reasons. First, it’s easy to learn, much easier than lower-level languages such as C++, C, or even Assembly. It also has reusable packages, such as string and network utilities, so that you don’t have to rewrite them over and over again. Finally, writing an application once will let it run on any operating system (for all intents and purposes), as long as a JRE is installed. Seeing some similarities?

jQuery serves the same purpose as Java, just for the web. It’s much simpler to learn than pure Javascript; it provides tons of simple APIs for DOM manipulation, event binding and handling, AJAX calls, JSON support, and more; and it supports just about every OS/browser combination. Does your DOM traversal check whether the Blackberry browser returns nodes that no longer exist? Sure, it could, but jQuery already does. Can you write custom versions of the APIs in jQuery? Absolutely. Will pure Javascript run faster than jQuery? Almost always. But will it check for all the tiny browser quirks in between? Probably not. The thing that makes jQuery so great is that you don’t have to worry about this: tons of edge cases have been found, addressed, and thoroughly tested by the jQuery team and other open source contributors.
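As a contrived illustration of the kind of quirk jQuery papers over (a sketch, assuming you still need to support IE8 and below, which used attachEvent instead of addEventListener):

[code lang="javascript"]
function handler() {
    console.log('Clicked!');
}

// Plain Javascript: you handle the old IE event model yourself
function bindClick(element) {
    if (element.addEventListener) {
        element.addEventListener('click', handler, false);
    } else if (element.attachEvent) {
        // IE8 and below; note that 'this' won't be the element inside the handler
        element.attachEvent('onclick', handler);
    }
}

// jQuery: one line, with the quirks handled for you
$('#myButton').on('click', handler);
[/code]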

Finally, just to address the code quality point: jQuery is a lot like PHP. PHP isn’t necessarily a bad language. Well, it is, but that’s beside the point. PHP gets an even worse rap because, like jQuery, it has a low learning curve and is a popular first language. Unfortunately, many amateurs write tutorials which many beginners use as a starting point, so a lot of PHP programmers never learn how to write good code in the first place, and the problem repeats itself. However, good programmers can write very good PHP. Facebook, which everyone knows as one of the most visited sites today, used to, and to an extent still does, use PHP. In the same manner, if you follow some jQuery best practices, the code you produce can be quite fantastic.

Check your extensions while in development mode

I ran across an interesting error a few days ago while working. We have an application where users are able to enter information about themselves in sections. One of the many sections allows a user to associate their account with various organizations, communities, etc., and is aptly named affiliations.

While developing this section, we didn’t notice any problems, as the module code for each section is based on the same base code, which should imply that if one section doesn’t work, the others shouldn’t either. However, when the application was first demonstrated to the clients, the section wouldn’t load. Since it worked when we checked again back in our development environment, we wrote it off as a temporary fluke on the testing server, or maybe a file that hadn’t been successfully pushed.

However, when the application was sent out for the clients to test, we received multiple reports of the section not loading for some of the testers. Looking into each user’s environment, we saw a common trait: it wasn’t loading for those using Firefox or Chrome, which was odd, since we figured that if any browser was going to have the error, it would be Internet Explorer, which we have to support. Even stranger, when we compiled down the application with RequireJS, the error was unreproducible.

If you’re like me, when you get into a situation like this, where you have absolutely no idea what is going on, you just start trying random things, so we tried everything we could think of. We went through each line of code in the file looking for any obscure syntax or reference errors, hoping that maybe RequireJS just couldn’t parse the file. We reduced the file down to an empty AMD module that returned a blank object, but it still didn’t load. Finally, we tried to load a duplicate of the file called asdf instead of the original. Oddly enough, this last-ditch effort worked.

It was about this time that a co-worker noticed that the section loaded in a fresh install of Google Canary, but not in his regular Chrome. Since the only real difference between the two was that Canary had no extensions or add-ons, he set about syncing their settings to see if that helped. After disabling all extensions and add-ons, he found that Chrome successfully loaded the file, so he re-enabled them one by one until the file stopped loading. The culprit? AdBlock (a fantastic extension, by the way; use it if you don’t already).

Turns out, affiliations was not the best name to give the file. Apparently, AdBlock makes assumptions about requests it thinks may be ads, or anything else that should be blocked. The file in question met a bunch of what we assume are likely criteria: it was a javascript file that could run any number of loading/displaying functions, and it was called affiliations, which is dangerously close to affiliates, which would normally indicate some sort of website or company that the website itself benefits from, e.g., an advertisement. Though the file itself was legit, AdBlock decided there was enough evidence to stop it from loading. What we thought was a very complex, in-depth, edge-case error turned out to be an incredibly simple oversight that no one thought to check.
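If you’d rather detect this than just avoid risky file names, here’s a minimal sketch of the idea (the path and message are hypothetical): load the script manually and warn the user if it never arrives:

[code lang="javascript"]
// If something (like an ad blocker) prevents the script from loading,
// the error handler fires and we can tell the user what to check
var script = document.createElement('script');
script.src = '/js/affiliations.js'; // hypothetical path
script.onerror = function () {
    alert('Part of this page failed to load. If you use an ad blocker, ' +
          'try disabling it for this site.');
};
document.getElementsByTagName('head')[0].appendChild(script);
[/code]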

I think there are two very important lessons in this anecdote. First, when you have an error and no idea what’s causing it, make sure your extensions aren’t the issue. Second, you should always try to check your application with some common feature-changing extensions installed to see what they do. If we had not discovered this error, we might not have been able to tell the client anything other than “we don’t know why it doesn’t work for you; it works for us and everyone else!”

jQuery – Remove handlers from global AJAX functions

If you’ve used jQuery’s Ajax functionality, you may be familiar with the Global Ajax Event Handlers, a nice set of functions for binding callbacks onto the various stages of an Ajax request. For instance, you can bind functions on ajaxSend and ajaxComplete, which are called when the request is sent and when it completes successfully, respectively, to show a “Saving…” and then a “Saved” message. You could also bind a handler onto ajaxError, which is called whenever a request fails, to pop up a dialog describing what happened. This way, you don’t have to copy and paste a bunch of handling code throughout your various $.ajax calls, and can instead handle them in one centralized location.

Unfortunately, while the jQuery team provides great methods to assign handlers to these events, they don’t provide any (at least, none that I’ve seen) to remove those handlers, as you can with jQuery’s .on() and .off(). While this may not have been a huge issue a few years ago, when the global handlers were reset every time the page reloaded, it’s become more of a problem with the emergence of single-page applications, where the page may not refresh before loading a new page.

We ran into this problem on a site I manage that uses this architecture, where navigating to a new page would tack on an additional ajaxSend and ajaxComplete handler without removing the old ones. Since there is no documented method to remove these handlers, and I couldn’t find any solution online, I started to dig into the jQuery source, hoping to find how they were stored so I could remove them manually. What I found was actually pretty interesting:


[code lang="js" title="jquery/src/ajax.js"]
// Attach a bunch of functions for handling common AJAX events
jQuery.each( [ "ajaxStart", "ajaxStop", "ajaxComplete", "ajaxError", "ajaxSuccess", "ajaxSend" ], function( i, type ){
    jQuery.fn[ type ] = function( fn ){
        return this.on( type, fn );
    };
});
[/code]

For each global event, they simply run a loop over the event names and create a function with that name that aliases to .on(), with the event name as the event type. This essentially means that each function translates to the following:


[code lang="js"]
$.fn.ajaxStart = function ( handler ) { return this.on('ajaxStart', handler); };
$.fn.ajaxStop = function ( handler ) { return this.on('ajaxStop', handler); };
$.fn.ajaxComplete = function ( handler ) { return this.on('ajaxComplete', handler); };
$.fn.ajaxError = function ( handler ) { return this.on('ajaxError', handler); };
$.fn.ajaxSuccess = function ( handler ) { return this.on('ajaxSuccess', handler); };
$.fn.ajaxSend = function ( handler ) { return this.on('ajaxSend', handler); };
[/code]

Sure enough, further down in the code, I found some evidence to confirm this. Within the bowels of the $.ajax() function lies this code:


[code lang="js" title="jquery/src/ajax.js"]
if ( fireGlobals ) {
    globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] );
    // Handle the global AJAX counter
    if ( !( --jQuery.active ) ) {
        jQuery.event.trigger("ajaxStop");
    }
}
[/code]

The line in question is globalEventContext.trigger("ajaxComplete", [jqXHR, s]);, which uses .trigger() to fire an ajaxComplete event, which is then picked up by the earlier .on() binding. To test my theory, I tried the following in the clean-up routine of the page, which is called whenever the user navigates to a new page:


[code lang="js"]
// .ajaxComplete() and .ajaxSend() should always be attached to the document,
// so we will try to unbind them from there as well
$(document).off('ajaxComplete').off('ajaxSend');
[/code]

And sure enough, it worked perfectly. The handlers were unbound on each page navigation and re-bound when the new page loaded, instead of stacking up until each one was called hundreds of times on every Ajax request.

Remember, since these handlers are bound through .on(), you can also target specific functions, just like you would anywhere else with .off().
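For example (a sketch with a made-up handler name), passing the original function to .off() removes just that handler and leaves any others intact:

[code lang="js"]
function onAjaxComplete(event, jqXHR, settings) {
    console.log('Request to ' + settings.url + ' finished');
}

// Bind the global handler...
$(document).ajaxComplete(onAjaxComplete);

// ...and later remove only that handler
$(document).off('ajaxComplete', onAjaxComplete);
[/code]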