CentOS server management with Spacewalk

This aims to be a quick reference for managing a CentOS server installation using Spacewalk. It will probably take more time than I expect, but I’m sure we will achieve my basic goal: install Spacewalk, create some repositories and subscribe a server to one of those repositories. Easy, huh? :)

I considered using Scientific Linux for the whole process, but I found that CentOS (and its “spiritual” father ;) ) is more popular at the company I work for. Anyway, I hope that managing SL repositories will be as easy as managing CentOS repositories.

So, now that we know what we want to achieve let’s start from the beginning!

Install Spacewalk

Installing Spacewalk is not as hard as one might think. I started from a minimal installation of CentOS 6, and the process described in the Spacewalk wiki is really close to the actual process. Still, some steps are explained rather briefly and, as I’m not a PostgreSQL expert, I also needed to follow the guide in the Spacewalk wiki on how to manually set up the database.
Be careful, because the PostgreSQL server must be initialized and started (I know it’s obvious, but CentOS doesn’t initialize or start the server when it is installed). To initialize and start the server you can execute the commands below:
$> service postgresql initdb
$> service postgresql start
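If you also need to create the database and the user by hand (this is basically what the wiki walks you through), the standard PostgreSQL commands look roughly like this; I’m reusing the names from my answers file below, so adapt them to your own values:
$> su - postgres -c 'createuser -P spaceuser'
$> su - postgres -c 'createdb -E UTF8 -O spaceuser spaceschema'
You may also need to allow password (md5) authentication for that user in pg_hba.conf before spacewalk-setup can connect.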
I decided to use PostgreSQL because it’s open source while Oracle… well, it’s just Oracle ;)
Just so you know, I configured Spacewalk using an answers file (spacewalk-setup --disconnected --answer-file=answers):
admin-email = root@localhost
ssl-set-org = Foo Org
ssl-set-org-unit = spacewalk
ssl-set-city = Barcelona
ssl-set-state = Barcelona
ssl-set-country = ES
ssl-password = spacewalk
ssl-set-email = root@localhost
ssl-config-sslvhost = Y
db-backend=postgresql
db-name=spaceschema
db-user=spaceuser
db-password=spacepw
db-host=localhost
db-port=5432
enable-tftp=Y
Don’t be afraid if you see an error warning you that the database is already installed; just execute the command that the alert gives you to “remove” the installed database and try again. Once you’ve installed and configured Spacewalk, you are ready to point your browser at the server IP and start playing around with it :)

First steps with Spacewalk and setting up repositories

First of all you will need to create an administrator account. For that, Spacewalk shows a dedicated page the first time you visit the portal:
Spacewalk - Install
Creating the account shouldn’t be a problem, so we will skip straight to setting up repositories!
Now, we will synchronize the EPEL 6 repository for 64-bit systems. This can be done both via the web UI and via the command line interface, and we will use both of them. To define the repository, we first need to create it in the portal (Channels > Manage Software Channels > Manage Repositories > Create new repository):
Spacewalk - Channels - Manage Software Channels - Manage Repositories
NOTE: The URL in this picture is: http://dl.fedoraproject.org/pub/epel/6/x86_64/
Once we have the repository configured, we are going to create a software channel (Channels > Manage Software Channels > Create new software channel) and then go to the Repositories tab of the software channel menu and select the recently created repository:
Spacewalk - Channels - Manage Software Channels - Repositories
At this point we reach the first serious issue. If you try to synchronize the repository using the web UI (see the blue link on the grey menu bar in the image above), it won’t work. A valid workaround is to use the CLI tools.
IMPORTANT NOTE: It seems like the web sync is actually working, but it can take a lot of time without showing any progress bar whatsoever, so I will stick with the terminal command, which gives me a list of synchronized packages in real time.
To synchronize all the software packages from a remote repository to a local Spacewalk software channel you will need to execute the command below on the command line:
spacewalk-repo-sync --channel ixavi-main-channel
The bare EPEL repository has more than nine thousand packages, so better grab a beer (or two; it took ten hours on my laptop) and wait ;) Of course, you could create your own repository with just a few packages and test Spacewalk with it but, since creating RPM repositories is not the focus of this article, we will leave that for another blog post and use a public one instead.
ISSUE: Taskomatic seems to have some serious problems handling huge repositories (like the EPEL one); it looks like it gets killed before completing its tasks. To avoid some of these issues you can edit the Taskomatic config file in /usr/share/rhn/config-defaults/rhn_taskomatic_daemon.conf and increase some ping defaults:

wrapper.ping.timeout=36000
wrapper.ping.interval=100

Anyway, it’s still likely that the Taskomatic instance was killed before you made these modifications, so check that it’s running or start it again by executing this command:

/etc/init.d/taskomatic start

Once the repository is successfully synchronized, you will see its metadata status on the software channel details page in the portal.

Activation keys and client registration

Before you can register clients on the channel that we’ve just set up, you must create an activation key. An activation key is what lets a client subscribe to a specific Spacewalk channel. To create an activation key go to Systems > Activation Keys > Create new key. In the following picture you can see how it should look:
Spacewalk - Systems - Activation Keys
Be careful, because Spacewalk adds a prefix to the activation key, and you will need to include that prefix when registering clients.
Once we’ve got the activation key it’s time to register some client systems. Registering clients is an easy task… but you will need to enable some extra repositories to get the most recent versions of some required packages. The registration procedure is very well documented in the Spacewalk wiki. It might sound awkward, but you could also download all the required packages manually (there are quite a few ;) ) from the external repositories to avoid enabling those repositories.
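Just as a reminder of what the registration itself looks like, the usual rhnreg_ks call is something like this (replace the hostname and the key with your own values, keeping the prefix mentioned above; the CA certificate path is the default one, adjust it if yours differs):
rhnreg_ks --serverUrl=https://your-spacewalk-server/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-your-activation-key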
ISSUE: If you are working with Proxmox VE and fully virtualized VMs with KVM (no matter which version you are using), you will need to manually modify the VM descriptor to set the UUID. Without this identifier, some registration commands will fail without any detailed error message in the system logs.
To define this identifier you will need to edit the VM descriptor placed in /etc/pve/qemu-server/???.conf and add the parameter “args: -uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx”, where each “x” is a hexadecimal digit. The “args” parameter lets you pass raw KVM parameters through the Proxmox virtualization system.
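For example, on the Proxmox host (101 is just a hypothetical VM ID and the UUID is a random one generated with uuidgen):
$> uuidgen
f47ac10b-58cc-4372-a567-0e02b2c3d479
Then add this line to /etc/pve/qemu-server/101.conf:
args: -uuid=f47ac10b-58cc-4372-a567-0e02b2c3d479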

Push changes from Spacewalk

OK, we’ve installed Spacewalk, synchronized an external repository, created an activation key and subscribed a client system, but none of these steps lets us manage the software or the configuration of the subscribed system, so… how can we do it? To install an RPM package remotely we will go to the Systems page, click on the subscribed server to see its details and then go to Software > Packages > Install. There you will find a list of all the packages available for the selected server; to install some of them, select them (using the checkboxes) and click on “Install Selected Packages”:

Spacewalk - Systems - Systems - Software - Packages - Install

Once you click the button, you will see the following dialog to confirm the event scheduling:

Spacewalk - Systems - Systems - Software - Packages - Install2

By default, client checks against the Spacewalk server happen about two hours apart, but you can force a check with:

$> rhn_check

This will force the client to check for pending actions on the Spacewalk server and execute the required commands.
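If you want to see what the client is actually doing, rhn_check accepts verbosity flags; as far as I remember, this prints every action as it is processed:

$> rhn_check -vv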

And that’s all. I ran into several errors along the way but, fortunately, all of them could be solved with just a few Google searches. The trickiest one was setting the UUID on the VMs that I was using to build the example (Proxmox VE).

Hope you like it :)

Entering Maven’s World

I was never very close to Maven. I mean, I used it in some projects in the past and never really liked it; I didn’t know it in depth and it kept bothering me by downloading all the required dependencies all the time (and I really do mean all the time!).
Well, regardless of my past experiences with it, I recently needed some tool to manage a not so huge but complex project. It is a project with “a few” dependencies, unit testing and so on, and I was not in the mood to manage it manually. So I had a quick look at this kind of tool and found that Maven still looks like the most mature of them. After that, a friendship of convenience between Maven and me arose, and I decided to write a simple reference about using Maven (something that I still haven’t found).

Maven_logo

NOTE: This article doesn’t aim to be a reference on how to manage a project with Maven but a quick help to refresh one’s memory of Maven.

Installing Maven

This is a topic that you can easily find in the official Maven documentation, but I will go over it once again (just for the lazy ones).

Step 1

Download it! Yes! It may sound obvious, but you need to download one of the latest stable versions of Maven to make it work!
You can download Maven from: http://maven.apache.org/download.html
Did you get it? Amazing! Keep on working!

Step 2

Uncompress the package somewhere you can remember. You will need the complete path to those files in the next step.
NOTE: If you are going to install Maven for all users on your computer you will need to choose an appropriate folder (/opt, /usr/local…).

Step 3

Define some environment variables. You will need to add M2_HOME and M2 and modify your PATH to include M2.
On Linux you can do it in your user’s .bashrc file or system-wide in /etc/environment or /etc/rc.local; it’s up to you.
For example, in your user’s .bashrc file you would need to add some lines like the ones below:

export M2_HOME=/path/to/apache-maven-3.0.4
export M2=$M2_HOME/bin
export PATH=$PATH:$M2

Furthermore, you can define MAVEN_OPTS and check that your JAVA_HOME is properly defined:

export MAVEN_OPTS="-Xms256m -Xmx1024m"
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

MAVEN_OPTS is useful to pass Java VM parameters to the JVM that runs Maven, and JAVA_HOME just points to your current Java VM installation.

NOTE: If you don’t have a Java VM installed you have to install it (it’s NOT optional, you must have it; Maven-Java, Java-Maven, you know… it’s like peanut butter-breakfast, breakfast-peanut butter). I recommend OpenJDK for version 7 and above (Oracle defined OpenJDK as the Java SE 7 reference implementation[1]) and Sun/Oracle for version 6 and below, but it’s just a personal recommendation.

Step 4

Refresh your environment variables (reboot, source your files, log out and log in… whatever works for you) and test that Maven is properly installed by executing:

$> mvn -version

And you will see something like:

Selection_005

Creating a project

Once Maven is installed and configured you can create your brand new project. Migrating an existing non-Maven project to a Maven one is much harder than starting from scratch, so we will create a new project and see how things work.

Maven can create different kinds of projects, so you will need to know which kind of project you want to generate. If you want a standard Java project, this command will be enough:

mvn archetype:create -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=com.ixavi.java.simpleproject -DartifactId=maventest

If you are going to implement a web application you will need a different command:

mvn archetype:generate -DgroupId=com.ixavi.java.webproject -DartifactId=maventest-web -DarchetypeArtifactId=maven-archetype-webapp

Take into account that these commands create a folder with your shiny new project; you don’t need to create a dedicated folder before executing them. Keep this in mind, because almost all other Maven commands require you to be in the same folder as the pom.xml file.

pom.xml and the local repository

The pom.xml[3] file is responsible for describing the project structure; it’s the base of every Maven project and lets you control and version (since it’s an XML file) a lot of project properties. In this file you will define dependencies, Maven plugins, extra repositories and so on. As I said before, almost all Maven commands should be executed from the pom’s folder, otherwise they will fail.

The local repository is the place where Maven stores all the installed artifacts. To begin with, you can think of it as an organized set of folders for downloaded JARs (it has more functions, but this is the most important one for now), and you will get more familiar with it as you keep working with Maven.

On Linux systems, the local repository is usually located in /home/User/.m2; on Windows systems it lives under the user’s profile folder (%USERPROFILE%\.m2) by default.
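If you are curious about that folder layout, a quick look inside the repository makes the groupId/artifactId/version structure obvious; for example (junit is just the dependency I happened to look for):

find ~/.m2/repository/junit -name "*.jar"

It should print paths like ~/.m2/repository/junit/junit/<version>/junit-<version>.jar.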

Integrating the project into Eclipse

Maven has full support for Eclipse, and generating an Eclipse project descriptor for a Maven project is as easy as executing the command below:

mvn eclipse:eclipse

It will let you import the project into Eclipse through “Existing projects into workspace” but, unfortunately, it’s not enough.

If you import a simple Java project like the one I explained above, you will get a lot of compilation errors; that’s because Eclipse doesn’t know where the local Maven repository is located. To fix this issue you will need to add a classpath variable pointing to that repository. You can add this kind of variable in the Eclipse preferences dialog: Window > Preferences > Java > Build path > Classpath variables > Add. Just add a new variable called M2_REPO with the full path to the local repository as its value.
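If you prefer not to click through Eclipse dialogs, the Maven Eclipse plugin can also define that variable for you; if I recall correctly, the goal below writes M2_REPO into the workspace settings (the workspace path is obviously just an example):

mvn -Declipse.workspace=/path/to/your/eclipse/workspace eclipse:configure-workspace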

There is a Maven plugin for Eclipse called M2Eclipse that you can install just by adding its site[2] to Eclipse’s software sources. I didn’t install it because I like to manage things from the command line but, once again, it’s totally up to you.

Implementing tests and running them

Maven has full support for unit testing: by default it creates a dedicated folder for test files and includes the JUnit dependency. Maven also generates a sample test in the same package as the main Java class (but under the test folder).

To execute all the tests contained in this folder (those that are named appropriately) you just need to run the command below, and you will see a complete report at the end:

mvn test

NOTE: By default all test classes are called “WhateverTest” and extend TestCase (from JUnit).
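By the way, if you only want to run a single test class, the Surefire plugin accepts a filter; assuming the generated class is called AppTest, it would be something like:

mvn test -Dtest=AppTest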

I will come back with more about Maven but I think that it’s enough to get started :)


[1] https://blogs.oracle.com/henrik/entry/moving_to_openjdk_as_the
[2] http://download.eclipse.org/technology/m2e/releases
[3] POM stands for Project Object Model and you can find more information about this file on: http://maven.apache.org/guides/introduction/introduction-to-the-pom.html

Speeding up my Ubuntu 12.04

My laptop will turn two (years old) soon. Some people would find this computer young enough for their tasks, but I am not one of them. Actually, it started to feel ancient in my hands a few months ago ;-) Make no mistake! I like it, I like its design, it is slim and light (well… it could be lighter) and it was surprisingly efficient when I bought it… two years ago.

Nowadays, there are amazing ultrabooks everywhere: Samsung series 9, Asus Zenbook, Dell XPS 13, HP Spectre/Folio, Vizio thin+light… They are so thin and so light and so beautiful and so… everything! Comparing those beasts with my little machine would be unfair, wouldn’t it? I think my Acer 3820T would lose every possible benchmark.

Anyway, last week I decided to keep this one and upgrade some components instead of buying a new one, because I don’t want to spend more than 1,000 euros on a laptop, and I’m not going to waste several hundred euros on one that I don’t like and that isn’t really what I want.

So, I made a list of upgradeable components and concluded that I could give my laptop a second childhood just by replacing its HDD and 9-cell battery with an SSD unit and a lighter, slimmer battery. This time I’m staying out of the RAM and processor races; I’m not squeezing either of them enough yet.

AcerTimelineX

SSD unit recommendations

Once I decided to swap my hard disk for a flash-memory-based one (whenever my cash flow allows the operation), I started looking for references from existing SSD-Ubuntu users. And I found some interesting recommendations:
  • The first one is to disable the swap partition. The argument is that swapping to disk requires a lot of read and write operations and this could reduce the SSD’s lifetime. I DO NOT RECOMMEND this at all but configuring your system to consume more RAM before swapping to disk[1] would be a good idea (see the sketch right after this list).
    NOTE: Remember! It is possible to configure modern Linux systems to use a file as the swap area[2].
  • The second one is to enable TRIM. I had no idea what TRIM meant before reading its Wikipedia page[3] and a question about it on AskUbuntu[4]. It looks like something very specific to SSD units, and I don’t recommend enabling it on a regular HDD.
  • The last one is about moving all logs and temporary files into main memory. It’s somewhat tricky because you need to know what you are doing. I mean, you need to know what these files can tell you and then ask yourself some questions: is it smart to lose all your log data every time you turn off your computer in exchange for better overall performance? Wouldn’t it be smarter to just switch off all your logging? After all, you won’t keep that information for very long anyway! As always, nothing is just black or white, so let me explain what I’ve done.
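This is, roughly, what the swappiness tuning mentioned in the first point looks like on my machine (10 is just an example value, pick your own):

$> cat /proc/sys/vm/swappiness
$> sudo sysctl vm.swappiness=10
$> echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf

The first command shows the current value (60 by default on Ubuntu), the second applies the new one right away and the third makes it persistent across reboots.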

Moving temporary files and logs to main memory

Moving temporary files into main memory is not as risky as moving log data. You need to understand that if you don’t keep your temporary files on disk you can lose unsaved information in case of a power failure or similar, but that’s all.
Moving log data into main memory is slightly different. Some programs or services rely heavily on their log folders (for example, the Apache server is not able to start if it doesn’t have its log folder) and the operating system may not boot properly if you don’t keep this information safe and sound.
In the end, I decided to move logs to main memory by default and create a safe_log folder to store the logs that really are required (like the Apache ones), explicitly modifying the configuration of the software responsible for them.
To move all these files to the main memory follow these steps:

STEP 1: Add these lines to /etc/fstab

tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/log tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,noatime,mode=1777 0 0

STEP 2: Stop the rsyslog daemon

$> sudo service rsyslog stop

STEP 3: Delete your temporary/log files

$> sudo rm -rf /tmp/*
$> sudo rm -rf /var/log/*
$> sudo rm -rf /var/tmp/*

STEP 4: Mount your filesystems

$> sudo mount -a

STEP 5: Start the rsyslog daemon (or just reboot)

$> sudo service rsyslog start

NOTE: There was a project called Ramlog that provided similar functionality with some useful extras, like syncing log files to disk before shutting down, but it doesn’t seem to have had any activity since early 2010.

Creating a new folder for your “safe logs” is as easy as creating the folder as superuser, giving the relevant system users permission to write to it and updating the applications’ configurations to place their logs there.
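For example, for the Apache logs it could look roughly like this (the folder name and the group are just my choice, adapt them to your setup):

$> sudo mkdir /var/safe_log
$> sudo chown root:adm /var/safe_log
$> sudo chmod 775 /var/safe_log

Then point the Apache ErrorLog/CustomLog directives (or the equivalent options of other services) at files inside /var/safe_log.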

Results and warnings

First of all, be careful! I made all these changes after five or more years of working exclusively with Linux distributions at home and managing several Linux servers in different jobs. Do not follow these steps without understanding what they mean. Questions are welcome in the comments section!
Second, and no less important, these changes improved my laptop’s performance more than I expected. For the moment, I will put the SSD deliberations on hold and see how much further I can tune this computer without breaking it. I will keep you up to date ;-)

[1] Swappiness Wikipedia article: link
[2] Useful information about swap (includes how to configure swap files): link
[3] TRIM article on the Wikipedia: link
[4] Question on AskUbuntu regarding TRIM: link

My experience using Amazon’s CloudFront as CDN – Part III

This is the last post about my experience dealing with (against?) Amazon CloudFront. You can review my previous posts here and here.

cdn

In the previous post we learned how to set up a private content distribution using the AWS SDK for PHP. That’s awesome, isn’t it? But it would be even better if we could build signed links to let some users access our S3 objects (that was the original goal); otherwise no one will be able to access this content.

Signed links for private content

Private content distributions do not provide public access to your content. Through the distribution, you cannot access S3 objects without a valid signature, no matter whether the objects were defined as private or public. It has to be this way because we created these distributions defining ourselves as URL signers (see “Creating a private content distribution” in part II).

If you think that you can define your S3 objects as public and then set up a private content distribution without dealing with OAIs, you are probably wrong. Obviously, you can do it and it will work, but your users would be able to access this content freely through the S3 addresses (okay, they would need to find your S3 root address, but that’s possible, isn’t it?). If this doesn’t matter to you, good for you!

Anyway, we will need to create signed links for our users. Let’s see how (as in parts I and II, we will keep using the AWS SDK for PHP in the code snippets):

$cfInstance = new AmazonCloudFront();
$cfInstance->set_keypair_id('your_keypair_id');
$cfInstance->set_private_key('your_private_key');

$signedLink = $cfInstance->get_private_object_url('distribution_id', 'object_path', 'unix_time_expiration');

Once executed, you will get a signed link to access the object at ‘object_path’ for a limited time. I always use Unix time to limit object access, but more formats are allowed in the get_private_object_url method (basically, any string that the strtotime() function is able to understand).
And that’s all, no external API requests for signed links; that’s why I like CloudFront so much. You can build three hundred links and only a few CPU cycles will be used: no I/O wait, no network latency, just CPU work. Awesome ;-)
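By the way, a quick way to sanity-check a generated link from a terminal (any HTTP client will do):

$> curl -I "<paste_the_signed_url_here>"

A 200 response means the signature was accepted; an expired or malformed signature gets you a 403 from CloudFront.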

Addendum

Playing around with SDKs, APIs and so on is kind of fun but sometimes it’s annoying to edit source code files just to set up your CDN. That’s the reason why, after walking this road to private content distribution, I was still looking for a useful tool to manage OAIs, CloudFront distributions, permissions and everything in the scope of this service.
 rightscale
In the end, that tool turned out to be the RightScale web panel (something I had already been using for months!). Besides its tools to manage servers, templates, auto-scaling arrays and other EC2 features, it lets you manage OAIs, set up CloudFront distributions with OAIs and URL signers, and much more, directly on the web. I know it’s simply the AWS API exposed through HTML forms, but it saves me a lot of time and lets me control more distributions than when I was hitting the naked API.

Honestly, I don’t know if these features are enabled on free accounts, but I encourage you to test them and let me know what you think.

My experience using Amazon’s CloudFront as CDN – Part II

IMPORTANT: There is a previous post on this topic where I introduced some interesting concepts about using CloudFront as a CDN. I recommend you read it before reading this one.

Distributing private content using CloudFront

One of the most important differences between CDNs is the way they allow you to distribute private content. There are several options to distribute private content from a CDN, but how you do it matters a lot depending on the number of private files you have to distribute.
CloudFront lets you generate temporary signed URLs that allow third parties to access private content through those links. CloudFront signatures can be generated using the PHP SDK and your Amazon account credentials, without any external API calls. If your webpage has to load more than one private file, avoiding external calls usually improves your response time. It always depends on the application’s behavior, but more than doubling your response time with added latency does not seem like a good idea at all.

cloudfrontlogo

The OAI concept

The easiest option when defining the data origin for a CloudFront distribution is to choose an S3 bucket. It lets you manage your files with your favorite S3 file manager (I have just discovered Dragondisk, and it is extremely useful for managing your Amazon buckets) without further configuration.
This setup will distribute your static files through the CloudFront locations as long as you have defined these files as public. If you define some S3 objects as private, the CloudFront daemons won’t be able to read the object content and therefore won’t be able to replicate it across their distributed sites. At this point you will need to start thinking about OAIs.
OAIs (Origin Access Identities) are virtual users that can be allowed to access S3 objects without defining these objects as public (the objects stay private, but the defined OAIs are able to access them). Creating OAIs and granting them read access to S3 files are simple operations, but they are (IMHO) not clearly explained in the official AWS documentation. Using the official AWS SDK for PHP, the OAI setup is as easy as:

$cf = new AmazonCloudFront();
$cf->set_keypair_id('aws_keypair_id');
$cf->set_private_key('aws_private_key');

$res = $cf->create_oai('referer');

You will need to remember the “referer” value for future operations. It isn’t possible to create two different OAIs with the same referer, so you will need to use different values for different OAIs.

Setting S3 object permissions

Granting OAIs access to S3 objects is another easy task as long as you know what to do. Although it is a mandatory step to get a private distribution, in my humble opinion it isn’t highlighted enough in the official documentation (anyway, you will find it if you know what you are looking for). Once again, using the official SDK for PHP:

$s3Instance = new AmazonS3();

$custom_acl = array(
    array('id' => 'oai_canonical_id', 'permission' => AmazonS3::GRANT_READ),
    array('id' => 'aws_user_canonical_id', 'permission' => AmazonS3::GRANT_FULL_CONTROL));

$res = $s3Instance->set_object_acl('bucket_name', 's3_object_name', $custom_acl);

Note that you will need to find the OAI and AWS user canonical IDs; I found them by listing the OAIs and the S3 object properties using the SDK. Canonical IDs are hashes that seem to be unique across all AWS services (that is just a personal observation, I haven’t confirmed it with anyone from Amazon) and they are what lets you grant OAIs access to S3 objects.

Creating a private content distribution

This is the final step to get a private (signed) content CloudFront distribution. When I was working with CloudFront it wasn’t possible to set up a distribution like this using the AWS web console, so I will show how to do it using the SDK:

$cf = new AmazonCloudFront();
$cf->set_keypair_id($yourAWSKeypairID);
$cf->set_private_key($yourAWSPrivateKey);

$opts = array(
    'Enabled' => true,
    'OriginAccessIdentity' => 'oai_id',
    'TrustedSigners' => array('Self'));
$res = $cf->create_distribution('bucket-name', 'dist_name', $opts);

Note that you need to configure the AmazonCloudFront object with additional security credentials and that you will need to retrieve the OAI id to create the distribution.

Once the distribution is created, all the objects within it will need to be retrieved using a signature from the AWS user (that’s the reason why the TrustedSigners property has ‘Self’ as its value). But this is something that I will explain in the next CloudFront-related post.

Rackspace CloudFiles PHP Binding error copying files

I’ve recently been evaluating some CDN services, and Rackspace CloudFiles, with its Akamai integration, has emerged as one of the most interesting ones. This is mainly because Rackspace offers CORS headers as part of its service (something that CloudFront still does not have… what are you waiting for, guys?) and because the response times are extremely low (I don’t have any measurements yet, but I have the feeling that latency was significantly lower with CloudFiles).

But I’m not going to describe the service that Rackspace can offer as a CDN; maybe I will do that in the future, but not today. Today I’m going to describe how to fix a problem with the PHP CloudFiles API binding.

When working with the UK endpoint, you cannot use the copy_object_to and move_object_to functions of the CFObject class, because they are broken.

In order to make the PHP binding work again, you only need to search for the lines below in the cloudfiles_http.php file:

$url_path = $this->_make_path("STORAGE", $container_name_source, rawurlencode($src_obj_name));
$destination = rawurlencode($container_name_target."/".$dest_obj_name);

And replace them with the next ones:

$url_path = $this->_make_path("STORAGE", $container_name_source, $src_obj_name);
$destination = $container_name_target."/".$dest_obj_name;

I don’t know why, but these lines were the last update the project received (six months ago or so), and by reverting them the functions mentioned above were brought back to life.

rax2

My experience using Amazon’s CloudFront as CDN – Part I

Preparing a website for more than 4 million users per month can be a hard task. Obviously, you cannot start a project from scratch expecting to support that number of users in the first week (if you can afford it economically… do you want to hire me? :) ), but maybe you can predict that you will reach a huge number of visits before the end of the current year.

I’m not going to describe step by step how I’ve tried to support a large number of users hitting a web application (it’s not something written in stone, because every application has different behavior and processes, and solutions that worked for my project might ruin someone else’s application), but I do want to share my experience dealing with Amazon’s CloudFront.
This deep desire to share my thoughts, troubles and successes with CloudFront comes from the hidden complexity of this CDN system. Yes, it seems to be a very basic and simple distribution service and, sometimes, it may lead you to think that you cannot go any further with Amazon’s CDN solution but… give it a try. As always, there is room for CloudFront to grow but, let’s see…
NOTE: The project that I’m using while writing this post (or set of posts) is developed in PHP, so I tried to keep my tools as close as possible to this programming language.
cloudfrontlogo

The Beginning

The first approach to CloudFront is always through the console provided directly by Amazon. This console allows us to see all our distributions and perform configuration updates on them. It’s easy to use and easy to get on with but, believe me, you will need more tools than this to have your distributions under real control.
The easiest way to get a distribution up and running is to link it to an existing S3 bucket using only the download configuration (not the streaming one). It takes less than a minute to set up and the result is a very competitive CDN resource.
Although the web console has useful features, like creating distributions, updating basic parameters, and enabling/disabling and deleting them, it is only the spark that lights a candle to guide us towards more complex distribution deployments.

Getting into the dark

If your aim is just to have a basic CDN to distribute public content such as design images, videos and stylesheets, stop reading now. Otherwise, you can keep on reading and discover what I was able to find behind the scenes.
A usual problem when using CDNs for static content distribution is that hotfixes can only be deployed freely as long as they do not touch the static resources. Due to the distributed architecture of CDN infrastructures, it’s really difficult to update static resources when you are short on time (but there are no IT projects with scheduling issues, are there?).
And guess what… Amazon provides a process to speed up this kind of operation. Using the official Amazon Web Services PHP SDK, there is a way to delete all copies of a static resource in all geographic locations, which forces Amazon’s distributed edge locations to download the new version of the modified resource. This process is called invalidation and must be used carefully.
A short sample of an invalidation code:

$cloudfront = new AmazonCloudFront();
$cloudfront->set_keypair_id('your_keypair_id');
$cloudfront->set_private_key('your_private_key_content');

$unwanted_files = array('file_to_invalidate1', 'file_to_invalidate2', '…');
$cloudfront->create_invalidation(CLOUDFRONT_DISTRIBUTION_ID, 'Invalidation Name', $unwanted_files);

NOTE: This code snippet is only valid if you have previously installed the PHP AWS SDK.

And that’s all for today. In the coming weeks I will try to write a post describing the steps needed to set up a CloudFront distribution for private content (or not strictly private, but signed).

Hope it helps! :)

Ubuntu, Acer, Broadcom’s BCM43225 and lost Bluetooth connectivity

I know that I promised to describe how to fix the function keys on a Macbook Air, but I left my previous job to join an amazing project (which I hope will see the light in a few weeks :-) ) and I had to say goodbye to my precious all of a sudden. This basically means that I had to step backwards and fall in love again with my heavyweight TimelineX 3820. This Acer model is not a bad computer at all, but you can’t compare a good-looking Fiat 500 with a Ferrari 458 Italia, can you?

Then, after a few minutes re-customizing the Ubuntu installation to suit my needs, I realized that after upgrading the Linux kernel (to 3.0) the Bluetooth connectivity was gone. The OS seemed to detect the Bluetooth capability of the wireless card, but there was no way to make it work again.

bluetooth_logo

I searched and searched and searched for days and I found no answer.

Honestly, I forgot about it for weeks, but last Monday I bought a Belkin music receiver on Amazon.es. Once I had it in my hands, it took me less than an hour to see that it works flawlessly (it’s awesome, believe me) paired with my phone, but it was not possible to use my laptop with it… and I have all my music stored there! I needed to fix this ASAP.

Without much hope, I googled once again and found nothing… nothing but a weird Linux module called “acer-wmi”. This module is supposed to make Acer hardware work a little better on Linux-powered laptops, but I decided to blacklist it and reboot my laptop, just to try something new. And guess what happened…

My laptop takes a little longer to boot (that’s true), but Bluetooth, the screen brightness hotkeys, the wireless connection hotkey and many more forgotten hardware/software capabilities came back to life without further configuration. I really don’t know what exactly is behind this module; I’m sure it must be useful on other Acer laptops, but not on my TimelineX 3820.

For those of you interested in disabling this module, you just need to add the line below at the end of /etc/modprobe.d/blacklist.conf and reboot your computer (there is a way to remove the module without rebooting, but it’s a little bit more complex: other modules may depend on this one, and there are further issues that you might want to avoid).


blacklist acer_wmi
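After rebooting, you can check that the module is really gone; this should print nothing at all:

$> lsmod | grep acer_wmi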

I’m enjoying my whole music library through a Bluetooth receiver, so it works! :-)

Belkin’s Bluetooth Music Receiver: Amazon.es

Fix touchpad behavior in a Macbook Air (Ubuntu Linux 11.10)

I received a brand new Macbook Air this week, but I had already decided that I was going to install Ubuntu to be more efficient at work (yes, I’m used to working with GNU/Linux and I didn’t want to spend a few hours getting used to OS X Lion).

In order to make everything work, I followed step by step* the instructions available on the Ubuntu community page for my specific Macbook model (https://help.ubuntu.com/community/MacBookAir4-2), and everything seems to work as expected… except the touchpad.

The way the touchpad worked had two issues that really annoyed me:

    • The first one was that I could not disable the tap-to-click feature, which is very annoying when you try to write with the built-in keyboard.

  • The second one was that the scrolling behavior was reversed to work like it does by default in OS X Lion. OK, this is not as annoying as the first issue, but this behavior affected the external mouse scroll wheel too.

To fix these issues you need to edit the /etc/X11/xorg.conf file and add these three lines in the touchpad section:

Option "MaxTapTime" "0"
Option "ScrollUpButton" "5"
Option "ScrollDownButton" "4"

So, you will have something like:

Section "InputClass"
    Identifier       "Multitouch Touchpad"
    Driver           "mtrack"
    MatchDevicePath  "/dev/input/event*"
    MatchIsTouchpad  "on"
    Option           "CorePointer"     "true"
    Option           "Sensitivity"     "0.65"  #    1 : movement speed
    Option           "ScrollDistance"  "100"   #  150 : two-finger drag dist for click
    Option           "ClickTime"       "25"    #   50 : millisec to hold emulated click
    Option      "MaxTapTime"      "0"
    Option      "ScrollUpButton" "5"
    Option      "ScrollDownButton" "4"
EndSection

Once you have updated this file, you will need to restart your computer or restart the lightdm service (service lightdm restart in the command line) to make these changes effective.

I hope that this short post helps you in some way and keep in touch cause I’m going to explain how to fix the function keys behavior in a few days :-)

* NOTE: I must say that I decided not to set up a swap partition but a swap file, in order to make it easier to manage partitions from both Ubuntu and Mac OS (I usually like to have separate partitions for home, swap and boot, but it’s not as easy on a Macbook; believe me, I’ve done it before on a Macbook 2,1). Another thing I also modified is the swappiness parameter, but that’s a personal preference; it has nothing to do with maintenance issues.

Macbook Touchpad

Hello world! (once again)

Hi there!

I’ve been writing on iXavi for two years now, but I’ve decided that it’s time to start writing some of my posts for a wider audience.

This blog started as a semi-professional blog about web programming and slowly turned into a personal blog about things I like and tech recommendations based on what I’ve learnt during these years. In order to keep everything as clear as possible, I’ve created this subdomain (en.ixavi.com); I will be posting in English here and will continue writing in Spanish on the old domain (ixavi.com).

Before closing this initial post, I also want to let you know that I’ve opened another subdomain where I will be posting in Catalan (cat.ixavi.com). I probably won’t upload tech-related content there; instead, I will let my political and economic opinions flow onto the front page of cat.ixavi.com. Anyway, it will replace my old personal site, xaviland.com.

After these introductory lines, welcome all!