September 2014
WordPress websites under attack…again

Last year, it was the wp-login.php brute force attack where bots kept trying to log on to websites by guessing the user name and password.
A basic step in protecting a WordPress website is not to have a user called ‘admin’.
For the last two weeks, there has been a wave of new attacks.
The new brute force attack tries to exploit XMLRPC in WordPress.
I was seeing thousands of requests to /xmlrpc.php per minute today.
I immediately went to the CloudFlare control panel and changed the basic protection level to “I'm Under Attack”. This gave me some breathing space while I figured out how to deal with it.
My first guess was to simply deny http access to that file.

<Files "xmlrpc.php">
Order Allow,Deny
Deny from all
</Files>

However, this isn’t very effective: it generates a massive number of 403 responses that the server still has to answer for every request, so it is little better than deleting the file itself.
As a last resort, I used an .htaccess rule to redirect requests away from the file.
The advantage of this is no high CPU or memory usage.
I added the following code to my .htaccess file:

RewriteRule ^xmlrpc\.php$ http://0.0.0.0/ [R=301,L]
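Note that mod_rewrite must be loaded and rewriting switched on for this rule to fire. A sketch of the full relevant .htaccess section might look like this (the 0.0.0.0 target just sends the bots nowhere):

```apache
# Sketch: redirect all xmlrpc.php requests away from the site
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^xmlrpc\.php$ http://0.0.0.0/ [R=301,L]
</IfModule>
```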

This is not an optimal permanent solution but it will have to do for now unless someone has a better suggestion :) . The attack itself lasted over 8 hours.

Update: The vulnerability that caused this denial of service has been fixed in WordPress 3.9.2 and the above workaround should no longer be needed.

System Administrator Appreciation Day

It’s on the 25th of July this year, which is today, and there is even a website for it.
A day to appreciate the people who keep our websites, databases, applications and internet connections running.

But…umm…this doesn’t look quite right.

WordPress SEO tips without plugins

If you are using a theme on WordPress that doesn’t get updated very often, it may be time to hack on that theme :) .
A lot of functionality can be replicated with small pieces of code, without the need to install plugins. Some plugins, like All in One SEO, can use an extra 10MB of memory on every page load. The following are some SEO tips that can be added to your WordPress theme.

1. How to generate a sitemap.xml file without using a separate plugin. Add the following code to your theme’s functions.php file:

add_action("save_post", "generate_sitemap");
add_action("delete_post", "generate_sitemap");

function generate_sitemap() {
	$postsForSitemap = get_posts(array(
		'numberposts' => -1,
		'orderby'     => 'modified',
		'post_type'   => array('post', 'page'),
		'order'       => 'DESC'
	));

	$sitemap = '<?xml version="1.0" encoding="UTF-8"?>';
	$sitemap .= "\n".'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'."\n";

	// Entry for the home page
	$sitemap .= "\t".'<url>'."\n".
		"\t\t".'<loc>'. home_url('/') .'</loc>'.
		"\n\t".'</url>'."\n";

	foreach ($postsForSitemap as $post) {
		// post_modified is "Y-m-d H:i:s"; keep only the date part for <lastmod>
		$postdate = explode(" ", $post->post_modified);

		$sitemap .= "\t".'<url>'."\n".
			"\t\t".'<loc>'. get_permalink($post->ID) .'</loc>'.
			"\n\t\t".'<lastmod>'. $postdate[0] .'</lastmod>'.
			"\n\t".'</url>'."\n";
	}

	$sitemap .= '</urlset>';

	$fp = fopen(ABSPATH . "sitemap.xml", 'w');
	fwrite($fp, $sitemap);
	fclose($fp);
}

This will generate a static sitemap.xml file whenever posts are saved or deleted. It will also regenerate the sitemap when you import posts using the WordPress import tool.
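Once the file exists, you can also point crawlers at it from robots.txt. A minimal sketch, where example.com stands in for your own domain:

```
# robots.txt
Sitemap: http://example.com/sitemap.xml
```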

2. How to prevent tags, categories, attachments and archives from being indexed by Google. You really only want Google to index your posts and pages. To do so, add the following code to your theme’s functions.php file:

// Add noindex meta tag to archive, tag and category pages
function noindex_meta_tag() {
	if (is_paged() || is_author() || is_tag() || is_date() || is_category() || is_attachment()) {
		echo '<meta name="robots" content="noindex, follow"/>'."\n";
	}
}
add_action('wp_head', 'noindex_meta_tag');

This adds a noindex meta tag to tag pages, category pages, date archives, etc., which are things you really don’t want appearing in search results. Remember that duplicate content in search results is bad.

3. Google Webmaster Tools complains that you have no description meta tag. The following code produces description meta tags on all your pages. The front page will use your website’s slogan while other pages and posts will use a generated excerpt.
Add this code to your theme’s functions.php file.

// Add description meta tag to posts and pages
function description_meta_tag() {
	if ((is_single() || is_page()) && have_posts()) {
		while (have_posts()) {
			the_post();
			echo '<meta name="description" content="';
			the_excerpt_rss();
			echo '" />'."\n";
		}
		rewind_posts(); // rewind so the main loop still runs normally after wp_head
	} elseif (is_home()) {
		echo '<meta name="description" content="'. esc_attr(get_bloginfo('description')) .'" />'."\n";
	}
}
add_action('wp_head', 'description_meta_tag');

4. Let us say you have your own domain and you use CloudFlare. How do you get the real IP of your visitors instead of CloudFlare’s IP?
There is a plugin for it, but if you want to, just add this to your theme’s functions.php file:

add_action('init', 'cloudflare_ip', 1);
function cloudflare_ip() {
	// CloudFlare passes the visitor's real address in the CF-Connecting-IP header
	if (isset($_SERVER['HTTP_CF_CONNECTING_IP'])) {
		$_SERVER['REMOTE_ADDR'] = $_SERVER['HTTP_CF_CONNECTING_IP'];
	}
}

Now your WordPress installation can have additional functionality without extra plugins that use extra memory or ask you to upgrade to premium versions.

Scheduling virus scans on Arch Linux

Virus threats under Linux are mostly negligible. But I might still want to keep my system free from virus infected downloads and such. There is no need for on-access scanning and a weekly disk scan should do.
Under Arch Linux, I have clamav installed with the freshclam service enabled, so the virus signatures update automatically.
Let us say I want to scan my /home partition once a week.

I will create a unit file /etc/systemd/system/clamscan.service

[Unit]
Description=Home Directory Virus Scan

[Service]
Type=oneshot
ExecStart=/usr/bin/clamscan --log=/var/log/clamav/clamd.log --remove=yes --recursive /home/ --infected

and a timer file /etc/systemd/system/clamscan.timer

[Unit]
Description=Home Directory Virus Scan

[Timer]
OnCalendar=weekly

[Install]
WantedBy=timers.target

Finally I can type:

systemctl enable clamscan.timer
systemctl start clamscan.timer

Running systemctl list-timers shows the active systemd timers.
And that’s it. Now clamav will scan the /home partition once a week, delete infected files and log its activity to clamd.log :) .

Berlin, the first city with its own TLD

The top-level domain .BERLIN was made publicly available for domain registration earlier this week. The idea for a TLD for the German capital came up in 1999, but it wasn’t until 2012 that ICANN announced an application process allowing for the introduction of new TLDs. So in January 2012, the dotBERLIN organization finally submitted the application for .BERLIN.

This makes Berlin the first city in the world to have its own top-level domain. Until now, most German businesses and organizations used Germany’s .DE TLD, but it is now possible for local companies and Berlin residents to register internet addresses under .BERLIN, their own city’s domain. This makes it easier for people to tell where a business is located.

Banks to pay for extended Windows XP support

In today’s news, the internet is outraged at big banks such as HSBC for claiming they can pay Microsoft a lot of money to extend support for the embedded edition of Windows XP, which powers their ATMs. Current extended support is scheduled to end this April.

By “the internet”, I mean people who read a lot of useless articles on the internet and comment on them…you know…like myself!

Android 4.4 Kitkat on HTC Desire HD

This is a guide for installing the latest Android KitKat 4.4.4 on HTC Desire HD. The guide assumes that you have adb and fastboot on your computer and that your phone is already rooted with a custom recovery installed.
First of all, download the latest unofficial CM11 nightly build. Then download the BaNkS GApps packages: GApps_Minimal_GoogleSearch_4.4.4_signed along with the two other zip files.
The minimal GApps + Google Search package makes the hardware search button on the Desire HD functional. Your phone won’t be any faster with the plain minimal GApps package; you will just be sacrificing functionality.
Now connect your phone to your computer with the USB cable and copy the files to your sdcard either using USB storage or through adb. To use adb, just type in a terminal window:

adb push /sdcard/

Repeat the same for the 3 gapps zip files and then type:

adb reboot recovery

Once you reboot to recovery, do a full wipe. That means formatting /data, /system and /cache. Formatting /system is necessary when moving from one ROM vendor to another; there is a chance you will be stuck in a boot loop if you don’t format /system before installing a new ROM.
If your recovery allows it, also format /boot.
Now as a further step that can solve some GPS issues, type the following while you are in recovery mode:

adb shell
dd if=/dev/zero of=/dev/block/mmcblk0p13
dd if=/dev/zero of=/dev/block/mmcblk0p14

This step is necessary when moving between ROM versions or vendors.
Now that the system is clean, we need to flash the CM ROM and GApps. You need TWRP or 4EXT recovery (or newer). Select “Install Zip from sdcard” and repeat that for the four zip files on the sdcard.
If your bootloader is unlocked but you are not S-OFF, you will need to install the boot image separately. Extract the boot.img file from the CM 11 nightly zip file. Navigate to it from a terminal window. While you are still in recovery, type:

adb reboot-bootloader

and then type:

fastboot flash boot boot.img
fastboot reboot

Now reboot to your fresh new android installation.
You will be greeted by a dialog that allows you to log onto your CM account, your google account and recover your installed google applications and settings.

The next step is to tweak some settings. Go to Settings -> About phone and keep tapping “Build number” until developer options are enabled. Now click back, scroll down and select “Performance”. Click OK on the warning dialog and enable “Force high-end graphics”. Turning this option on makes your phone a bit faster because it shifts some load from the CPU to the GPU. Then click on memory management and enable “Allow purging of assets” and “Kernel samepage merging”. After that, reboot your phone.

That was it! Enjoy your KitKat installation.

References: XDA CM11 DHD thread

Arch Linux and static libraries

Last year or the one before, Arch Linux added !staticlibs to the default options in makepkg.conf. This means that after being compiled, packages have their static libraries removed during packaging. Packages that absolutely need static libraries can add options=('staticlibs') to their PKGBUILD to keep them.
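As a sketch, opting a single package back in looks like this in its PKGBUILD:

```
# PKGBUILD (sketch): keep this package's static libraries
# despite the global !staticlibs default
options=('staticlibs')
```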

The advantages are smaller package sizes, and that an application depending on libxml2 will automatically use the new shared library when Arch Linux patches a security bug in libxml2.

But what about applications that ship binaries and libraries in the same package? Take libarchive, for example. Instead of using the --disable-static configure flag, its static libraries get removed at the end of the build.
Is that the same thing? No. Without --disable-static, bsdtar and bsdcpio are statically linked against the internal libarchive library, resulting in roughly a one-megabyte increase in package size over --disable-static.

If I build libarchive without --disable-static, /usr/bin/bsdtar is 623776 bytes, but if I add --disable-static, it is only 56672 bytes.

Obviously not all packages use autotools or even accept the --disable-static configure flag, but I think Arch Linux should have added that configure flag to as many packages as possible before auto-removing static libraries.

How to stop XML sitemap from appearing in Google search results

I use the All in One SEO Pack plugin to generate my sitemap.xml file. For some reason, Google decided to place the actual sitemap.xml in the search results instead of just the included URLs.

After searching on Google, I found suggestions to add the following to my .htaccess file.

<FilesMatch "sitemap.xml">
Header set X-Robots-Tag "noindex"
</FilesMatch>

<Files sitemap.xml>
Header set X-Robots-Tag "noindex"
</Files>

Neither worked, and I figured out that they couldn’t: the sitemap is a virtual file that the plugin generates when sitemap.xml is requested, so there is no file on disk for these directives to match.
Instead, I opened wp-content/plugins/all-in-one-seo-pack/aioseop_sitemap.php and I looked for the following line:

echo '<urlset xmlns="">' . "\r\n";

I added the following line just after it then saved the file and uploaded it.

header("X-Robots-Tag: noindex", true);

This adds an X-Robots-Tag: noindex HTTP header to sitemap.xml requests, so Google doesn’t index the file. To verify, try opening your sitemap.xml through web-sniffer and look for the X-Robots-Tag: noindex HTTP response header :)
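If you prefer the command line to web-sniffer, grep can pick the tag out of the response headers. A quick local sketch of what to look for (a real check would pipe `curl -sI http://example.com/sitemap.xml` into the same grep, with example.com replaced by your own domain):

```shell
# Simulated response headers; grep extracts the tag we just added
printf 'HTTP/1.1 200 OK\nX-Robots-Tag: noindex\nContent-Type: text/xml\n' | grep -i 'x-robots-tag'
```

If the header is present, this prints the `X-Robots-Tag: noindex` line; if grep prints nothing, the plugin edit didn't take effect.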

Colorful birds

I saw this cage outside some shop on my way home today. I’m not sure what those birds are called but I liked the colors. They looked so little and cute.
And yes I know, the picture should have been in landscape mode :P but I took the picture quickly before someone would see me and wonder why I was taking a picture :) .
