
Views Link Issue with Replacements

So, I was working on a view that rattles off a list of categories, with each category linking to a second view that has a Contextual Filter on that category field, listing all content items with a matching value. Similar to taxonomy terms, but linked from a select dropdown. I was using the "output this field as a link" option, with a replacement token to put the item's value into the URL. Everything was pretty straightforward, except for one little snag: one of my categories had a forward slash in it. Normally I would expect that / to be url-encoded to %2F, but that wasn't happening.

I did some digging, using hook_views_pre_render to modify what was being sent to the display, and I found that the output was in fact running through drupal_encode_path. Here's that function in full:

function drupal_encode_path($path) {
  return str_replace('%2F', '/', rawurlencode($path));
}

As you can see, it just url-encodes the string you send it, and then decodes the slashes back. That means that no matter what I do in hook_views_pre_render, I am totally unable to get that / encoded properly for the URL. Bummer.
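
For the curious, here's the shape of what I was trying in hook_views_pre_render (the module, view, and field names here are all made up) — massaging the field value before it gets rendered. As noted above, nothing I did at this stage got the slash through intact:

/**
 * Implements hook_views_pre_render().
 *
 * Hypothetical names throughout; this is the shape of the
 * attempted workaround, not working code.
 */
function mymodule_views_pre_render(&$view) {
  if ($view->name != 'category_list') {
    return;
  }

  foreach ($view->result as $row) {
    if (isset($row->field_category)) {
      // Massage the category value before it's rendered as a link;
      // drupal_encode_path() still gets the final say on output.
      $row->field_category = str_replace('/', '%2F', $row->field_category);
    }
  }
}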

Eventually I hacked around it by making a separate view for the offending item (there was only one), but for now it looks like my best option is to not allow slashes in that particular dropdown.

Aegir Virtualhost files on Ubuntu

Just a very quick note, as I had to do a bit of hunting to find where the heck Aegir was generating my virtualhosts for all of the domains I was managing through it.

By default it is located in /var/aegir/config/server_master/apache/vhost.d/, where that server_master part might change depending on what host you're modifying (if you're in a complex hosting situation, for example).

This comes in handy if you want to install SSL certificates yourself instead of going through Aegir to handle the certificates.
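
For example, each site you host gets its own file in that directory, named after its domain — something like this (domains are illustrative):

ls /var/aegir/config/server_master/apache/vhost.d/
# example.com  staging.example.com  dev.example.com

One caveat: Aegir regenerates these files when it verifies a site, so keep in mind that manual edits can get clobbered.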

PsySl Service Locator Experiment

Yesterday, during a conversation with my coworkers about how Dependency Injection works with the Service Locator pattern, we talked through what would happen with cyclical dependencies. Basically: what happens if you tell a Service Locator to load a dependency for one class, and that class has a dependency on the class that originally called the Service Locator? This causes an infinite loop, and in PHP, a memory-exhausted fatal error pretty rapidly.

In PHP an example would be:

class A
{
    public $b;

    public function __construct(\PsySl\ServiceLocator $sl)
    {
        $this->b = $sl->resolve('B');
    }

    public function getSomething($llama)
    {
        return "llama = $llama";
    }
}

class B
{
    public $a;

    public function __construct(\PsySl\ServiceLocator $sl)
    {
        $this->a = $sl->resolve('A');
    }
}

$sl = new \PsySl\ServiceLocator();
$sl->register('A', 'A')
   ->register('B', 'B');

$a = new A($sl);

Class A attempts to resolve a dependency for class B; class B then tries to get a dependency for class A, and so on. It turns out there is a way to avoid this, if you're willing to make some concessions.

This is where my PsySl comes into play. It is basically a small experiment to see if I could deal with the cyclical dependency loop. Here's the meat of the ServiceLocator class:

public function resolve($name)
{
    if (!empty(self::$openDeps[$name])) {
        $res = new StubResolver($name, $this);
        self::$neededResolutions[$name] = $res;

        return $res;
    }

    $resolvedDependencyKey = null;
    $trace = debug_backtrace();
    if (!empty($trace[1]) && !empty($trace[1]['class'])) {
        $resolvedDependencyKey = array_search($trace[1]['class'], self::$dependencies);

        if ($resolvedDependencyKey !== false) {
            self::$openDeps[$resolvedDependencyKey] = true;
        }
    }

    if (!isset(self::$dependencies[$name])) {
        throw new Exception("Unable to find dependency $name");
    }

    if (!isset(self::$resolvedDependencies[$name])) {
        self::$resolvedDependencies[$name] = new self::$dependencies[$name]($this);
    }

    if (!empty($resolvedDependencyKey)) {
        unset(self::$openDeps[$resolvedDependencyKey]);
    }

    return self::$resolvedDependencies[$name];
}

There's a lot going on in this method. First, it checks whether the requested dependency is already "open" (something that has entered the resolution stack but has yet to be completely resolved). If it is, it creates a new StubResolver and records which dependency the stub stands in for. Next, it checks the stack trace to see if the calling class is one of the things it is registered to resolve (and really, it should probably also check that the call came from a constructor, since otherwise it wouldn't cause a loop). If so, it adds that class to the list of open dependencies and continues on. Finally, it resolves the dependency it was originally asked for, closes its open dependency, and returns. It also keeps a copy of each dependency it creates in a static variable, so the StubResolver can later resolve its dependency without entering an infinite loop. (Which consequently means the dependencies have to be singletons, a problem I'm not sure it's possible to get around.)

That's about where the magic happens. In our new A($sl) example, A tries to get a copy of B, and in the process registers A as an open dependency. B then tries to resolve A, and is returned a StubResolver instance for its "A" dependency. B finishes resolving, and everyone is happy.
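
You can see the stand-in at work (a quick illustration using the classes from above):

$a = new A($sl);

echo get_class($a->b);    // "B"
echo get_class($a->b->a); // "StubResolver" — B was handed a stub instead of the real A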

But wait, now B has a StubResolver in place of its $a property. How does it fully resolve to A? Well, StubResolver has a magic __call method which does the following:

public function __call($method, $params)
{
    if (!isset($this->_dependency)) {
        $this->setDependency($this->_serviceLocator->resolve($this->getDependencyName()));
    }

    return call_user_func_array(array($this->_dependency, $method), $params);
}

So, as you can see, the dependency is lazily loaded the first time you call a method on the stub. You would also need to implement __get and __set to do the same thing, forwarding property access along to the dependency as well.
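
Here's a minimal sketch of what those could look like, mirroring the lazy load in __call (the _dependency, _serviceLocator, setDependency, and getDependencyName members are the ones from the snippet above):

public function __get($name)
{
    if (!isset($this->_dependency)) {
        $this->setDependency($this->_serviceLocator->resolve($this->getDependencyName()));
    }

    return $this->_dependency->$name;
}

public function __set($name, $value)
{
    if (!isset($this->_dependency)) {
        $this->setDependency($this->_serviceLocator->resolve($this->getDependencyName()));
    }

    $this->_dependency->$name = $value;
}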

All of this is incredibly impractical and breaks all kinds of conventions, but it was a fun exercise to dig a bit into the associated patterns and how PHP can be manipulated for evil (or good). You can view the full source over here: https://github.com/cthos/PsySl

Profiling PHP with Xdebug

My last post was about using ab and JMeter to get some performance benchmarks on a website. Now I figure I should probably mention how to profile your application to look for memory leaks, bad calls, and which parts of the code are taking the most time.

The first thing to do is get Xdebug installed. There are several ways to go about this, the easiest being to let a package manager do the hard work for you. For example, if you're running Ubuntu with the default PHP from apt, you would do:

sudo apt-get install php5-xdebug

Failing that, the xdebug site has a full set of instructions on how to install and configure it: http://xdebug.org/docs/install

Okay! Now that it's installed (and presumably working), jump into php.ini (usually in /etc/php.ini on Linux systems; I hear it's in /usr/local/etc on FreeBSD, for example).

There are a couple of configuration settings you can set in order to turn on profiling. Here's how they work (they're all documented here as well: http://xdebug.org/docs/profiler):

xdebug.profiler_enable=1 - This will turn on profiling for all requests, which is not a great idea as it will generate a lot of cachegrind files quickly.

xdebug.profiler_enable_trigger=1 - This turns on profiling if XDEBUG_PROFILE=1 is present in the GET/POST, or if you have a cookie set with that name (content doesn't matter)

xdebug.profiler_output_dir=/tmp - Sets the profiler's output directory!

xdebug.profiler_output_name=cachegrind.out.%t - This will name the file "cachegrind.out.{timestamp}". There are lots of naming options, and you can see them all here: http://xdebug.org/docs/all_settings#trace_output_name
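
Putting that together, a typical trigger-only setup in php.ini looks like this:

; Profile only when XDEBUG_PROFILE is present in the request
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1
; Drop timestamped cachegrind files into /tmp
xdebug.profiler_output_dir=/tmp
xdebug.profiler_output_name=cachegrind.out.%t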

So, you usually want to turn on profiler_enable_trigger, then pick out a page you want to profile (maybe one that is fairly slow) and add ?XDEBUG_PROFILE=1 to the end of the URL. Then go check the directory you set in profiler_output_dir, and check out the cachegrind files it generated.
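
If you'd rather trigger it from the command line, a quick curl does the trick (URL is illustrative):

curl "http://example.com/some-slow-page?XDEBUG_PROFILE=1"
ls /tmp/cachegrind.out.*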

What the heck do you do with cachegrind files?

If you're on Windows, you'll want to dig out WinCacheGrind. On Linux, KCachegrind. On Mac, I've tried out MacCallGrind; it's a paid app, but the trial allows you to open files up to 3MB in size (which will generally be enough for smaller page loads, but if you have a serious call stack problem it won't cut it).

So, fire up the cachegrind parser of your choice and load up a cachegrind.out file. You will be presented with a window that looks like this (WinCacheGrind pictured).

Learning to read the output isn't too difficult: the left side of the window is the entire call stack, starting at the outermost call and allowing you to drill down into all of the subsequent calls. You can also do that in the main panel. The different columns of the main window show the amount of time each call took to execute. Self is the total amount of time that particular invocation took; in the screenshot, my main file (but nothing it included) took 1ms. The Cum. column is the cumulative amount of time that all of the child calls inside the main method took (aka 22 seconds).

Just stepping through the entire application can be useful, but more useful is being able to sort through the most expensive calls that are happening in your application. Simply click on the Overall tab, and you will be presented with a screen like the following.

As you can see, there are several more columns which provide you with more information. Avg. Self is the average amount of time that individual call took across all of its invocations. Avg. Cum. is the average amount of time the call took including all of the child calls that method invoked. Total Self is the total amount of time the individual call took across all invocations. Total Cum., not surprisingly, is the total amount of time the call took including child calls. Calls is the total number of times that method was called during script execution.

That Total Cum. column is the one you will want to look at to determine where large chunks of time are being used up. As you can see, I've sorted on that field, and PHP's call_user_func_array is taking the largest amount of time. That's not particularly useful by itself, though you can then drill down into the call stack by double-clicking (in the line-by-line section) until you find a more concrete problem. How do I know it's not actually call_user_func_array's fault? For one, its Total Self is 18ms, which is almost nothing. For two, it doesn't do anything except make another call.

Drilling down, I find that ultimately the longest call is split between view->build and view->render, indicating I have a problem with a Drupal view I'm including on a bunch of pages. Going even deeper, the majority of the time in those calls is spent in a MySQL call, indicating a slow query is the bottleneck.

Optimizing MySQL is beyond the scope of this article, but it's next in my series of site optimization articles, so stay tuned!

Website Benchmarking

I've recently been evaluating different website benchmarking tools, so I thought I would take a moment to highlight two of them I have used.

ab

ab is the Apache Benchmark tool, and it comes bundled with Apache. It's a pretty simple CLI tool used to test the throughput of a website. It has a bunch of different options you can pass to it, but the most important are -c (number of concurrent connections) and -n (number of requests). Its man page is pretty well written, so I'll let you explore the other options on your own.

So, here's a sample of testing alextheward.com using ab:

ab -n 200 -c 20 http://alextheward.com/

(Note that you need to specify both the protocol and a trailing path, otherwise ab will complain.)

And the results of that benchmark:

ab -n 200 -c 20 http://alextheward.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking alextheward.com (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests

Server Software: Apache/2.2.22
Server Hostname: alextheward.com
Server Port: 80

Document Path: /
Document Length: 19858 bytes

Concurrency Level: 20
Time taken for tests: 26.306 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 4076928 bytes
HTML transferred: 3986634 bytes
Requests per second: 7.60 [#/sec] (mean)
Time per request: 2630.642 [ms] (mean)
Time per request: 131.532 [ms] (mean, across all concurrent requests)
Transfer rate: 151.35 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 69 2121 1728.4 1554 9394
Processing: 0 296 372.5 148 2138
Waiting: 0 63 166.3 0 897
Total: 331 2417 1776.0 1846 9684

Percentage of the requests served within a certain time (ms)
50% 1846
66% 2623
75% 3189
80% 3586
90% 4958
95% 6051
98% 8059
99% 8518
100% 9684 (longest request)

Looks like I could be doing a better job serving up web requests! Lowering the concurrency certainly helped the test, getting the request time down to < 1000 ms for 90% of requests, so I need to see what's going on with Apache when I'm serving up concurrent requests.

There's another gotcha with ab: it cannot handle SSL requests to a server with a self-signed cert, and there does not appear to be any way to tell it to ignore SSL errors either.

Apache JMeter

JMeter is actually really cool, but it does have a bit of a learning curve. I've attached a couple of images which show a basic configuration.

The first thing you have to add is a Thread Group. This is where you tell it how many threads to run, and how many requests each thread will make. After that, you add HTTP Request Defaults, so you can specify the default server and the default request URI. Next, you add an HTTP Request sampler and give it the URI you want to test; you can add as many of these as you want. Finally, you need something to read the results. I've added two: one which shows the sample results, and another which shows the average request times over an interval.

After you hit the run button, you will see the results show up in those listeners.

There's actually a pretty good intro over here: http://jmeter.apache.org/usermanual/build-web-test-plan.html

It walks you through how to do web benchmarking with JMeter. There are a bunch of other features which are outside the scope of this post, but it's a solid tool for doing all kinds of performance testing.
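
One more tip: once you've built and saved a test plan in the GUI, you can also run it headless from the command line, which is handy on a server without a display (file names here are whatever you saved your plan as):

jmeter -n -t my-test-plan.jmx -l results.jtl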

CiviCRM API Part 2

A very brief continuation of my CiviCRM API exploration from the other day.

I have gotten everything up and running, but there are a few more things which one needs to be aware of when querying against the search API.

Custom Fields

You can query against custom fields which you've defined for your CiviCRM contacts, but you have to address them by the name custom_{id}, where {id} appears to be an auto-incrementing key that you will either need to look up in the database or fetch via the CustomField API, which will tell you what the appropriate ID is.

Changing the number of records returned

As far as I can tell, this isn't documented anywhere, but the variable you want to pass is rowCount. This is useful when you want to get all your custom fields, as the default rowCount is 25.
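
Putting those two together, here's a rough sketch of a v3-style Contact search (the custom field ID and values are made up, and this assumes a bootstrapped CiviCRM environment):

// Illustrative only: custom_7 stands in for whatever ID the
// CustomField API reports for your field.
$params = array(
  'version'   => 3,
  'last_name' => 'Ward',
  'custom_7'  => 'Llama', // custom fields are addressed as custom_{id}
  'rowCount'  => 100,     // override the default of 25
);
$result = civicrm_api('Contact', 'get', $params);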

Searching on multiple fields defaults to an OR relationship

Or at least, as near as I can tell, it does. Basically, if you query against both first_name and last_name, it will give you records that match either field. I'm not presently aware of how to change that behavior.

Anyhow! Finally up and running.