Cloudflare updater script

Today I’m going to show you a simple PHP script to update a DNS entry for a domain that is managed by Cloudflare; this way Cloudflare can become a sort of Dynamic DNS service like No-IP or DynDNS, with the difference that you can use your own domain/subdomain. Unlike other services, Cloudflare has a free plan and also a nice set of APIs that allow you to easily change the IP (and other things).

If you need an explanation of what a Dynamic DNS is, I’ll leave the Wikipedia page here, but to make things short: it’s a way to access a machine (a server, a NAS or simply a Raspberry Pi) connected to the Internet without knowing its IP, which changes over time, so you can’t just add it to a static DNS entry.

After you have created an account on Cloudflare and added your domain (I’m skipping this part because their tutorials are way better 😛 ), the first thing we need to do is obtain an API key. First step: from the top menu click on your email address and then on “My Settings”.



Then scroll all the way down until you see the API Key section and click on “View API key”.


Finally a popup will show. Remember that the API key is like a password, so it should never be shared with other people (from the previous page you can regenerate a new one if needed).

Now that we have an API key we can start modifying the script:

  • line 8: you can change the default timezone (here’s a list of supported timezones);
  • line 11: $apiKey, the key we just obtained;
  • line 12: $myDomain, the domain that we want to update; this domain needs to be configured in the Cloudflare panel;
  • line 13: $emailAddress, the email address of your account;
  • line 14: $ddnsAddresses, an array that contains the subdomains that need to be updated with the current IP of the machine that is executing the script;
  • line 17: a simple webservice that returns the current IP of the machine without any markup (feel free to use the default one);
  • line 20: the URL of the API endpoint; this should not be changed unless Cloudflare changes it;
  • line 23: the path of the log where you can find the updates of the IP (needs to be writable, so give it 777 or other suitable permissions);
  • line 24: the path of the error log (also needs to be writable).
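The update flow the PHP script implements can be sketched in shell. This is only a minimal sketch: the v4 endpoint and the ZONE_ID, RECORD_ID, RECORD_NAME, CF_EMAIL and CF_API_KEY variables are illustrative assumptions, not taken from the original script (which may target a different Cloudflare API version).

```shell
#!/bin/sh
# Fetch the current IP, compare it with the last one we saw and
# call the Cloudflare API only when it actually changed.

LAST_IP_FILE="${LAST_IP_FILE:-/tmp/cf_last_ip}"

# any webservice returning the bare IP without markup works here
get_current_ip() {
    curl -fsS https://api.ipify.org
}

# true (exit 0) when the IP differs from the cached one
ip_changed() {
    last=$(cat "$LAST_IP_FILE" 2>/dev/null || echo "")
    [ "$1" != "$last" ]
}

# update the DNS record and remember the new IP
update_record() {
    curl -fsS -X PUT \
        "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
        -H "X-Auth-Email: $CF_EMAIL" \
        -H "X-Auth-Key: $CF_API_KEY" \
        -H "Content-Type: application/json" \
        --data "{\"type\":\"A\",\"name\":\"$RECORD_NAME\",\"content\":\"$1\"}" \
    && echo "$1" > "$LAST_IP_FILE"
}
```

The check-before-update logic is what keeps the script friendly with API usage limits when run from cron.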

This script only needs PHP to be installed on the server/NAS/whatever, and it should be added to crontab so that it executes every few minutes (for example every five minutes). The script automatically checks if the IP has changed and if there’s no connection, so there shouldn’t be any problems with API usage.
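For example, a crontab entry like the following (the script path is illustrative) runs it every five minutes:

```
*/5 * * * * /usr/bin/php /path/to/cloudflare-updater.php
```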

Feel free to contact me by email or by posting a comment in the box below if you encounter any problems; I’ll try to respond to all the comments/emails.


Server uptime monitoring using UptimeRobot

Today I’m going to present a simple script that uses the UptimeRobot APIs. UptimeRobot is an awesome service that, for free, allows you to check the uptime of your servers (up to 50) with a check based on a webpage or an open listening port, with a minimum delay between checks of 5 minutes (or higher). It also supports webhooks: basically, when one of your servers goes offline, UptimeRobot makes a call to a URL. I use this feature to send an SMS to my telephone through a paid SMS gateway (unsupported by UptimeRobot). For more information visit their website.

Anyway, I created this script to present some statistics on the monitored servers, such as the uptime over the last days, the response time during the last 24 hours and, most importantly, the current status of the server (online or not).

The script is configurable, you can:

  • select which servers to add: UptimeRobot supports two types of API keys, a private one that gives access to all the monitors inside your account and a public one that gives access to the statistics of a specific monitor; it’s up to you which key to use, but keep in mind that you can’t add the same monitor twice;
  • choose whether you want to see the response time of the last 24 hours and how precisely you want to see it; keep in mind that enabling this feature increases the loading time of the page (no problem with 4-5 monitors). The precision is indicated in minutes: for example, 30 minutes means that the graph will show the response time every 30 minutes; I prefer 120 minutes to avoid spikes in the graph;
  • regarding the uptime statistics, select the periods used to calculate the uptime ratio; by default these are the last day, last week and last month (1, 7 and 30 days), but you can add other values such as ‘1-7-30-60’.
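The call behind a status page like this can be sketched with UptimeRobot’s v2 getMonitors endpoint, which returns the status, uptime ratios and response times of your monitors. The function and variable names below are illustrative, not taken from the script itself.

```shell
#!/bin/sh
# build the custom_uptime_ratios parameter from a "1-7-30"-style string,
# mirroring the script's configurable uptime periods
build_ratios() {
    echo "custom_uptime_ratios=$1"
}

# query the API; $1 is the (private or monitor-specific) API key,
# $2 the optional ratio periods
fetch_monitors() {
    curl -s -X POST "https://api.uptimerobot.com/v2/getMonitors" \
        -d "api_key=$1" \
        -d "format=json" \
        -d "response_times=1" \
        -d "$(build_ratios "${2:-1-7-30}")"
}
```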

You can see the script in action on my status page. The script is fully responsive and works in every browser at every resolution 😛
I left some comments in the script so you should be able to customize it (like I did to add the munin graphs and Laura Bodewig, the anime girl in the background).

Now to the download links, remember to edit the config file. Feel free to contact me if anything goes wrong.

I also used these awesome libraries:


Android: WebView and “date” input type

During the porting of an application from iPad to Android I ran into a problem with a webpage used to register a user on an external service. This page uses some HTML5 tags that are not supported on some Android devices, in particular the “date” input type.

The main problem is that I can’t modify the page, so I have to do something else to show the user a datepicker. Then I thought I could use jQuery UI’s datepicker: after all, I just need to inject a JavaScript call into this page declaring that this field has to be initialized as a datepicker.

webView = (WebView) findViewById(R.id.webView); // the id is illustrative
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebViewClient(new WebViewClient() {

        @Override
        public void onPageFinished(WebView view, String url) {
                webView.loadUrl("javascript: $(\".form-date\").datepicker({ dateFormat: 'yy-mm-dd', changeMonth: true, changeYear: true, yearRange: \"-100:-10\"}).datepicker(\"option\", $.datepicker.regional[\"it\"]);"
                        + "$(\".form-date\").val(\"1990-01-01\");"
                        + "$(\"#form-registrazione\").get(0).setAttribute('action', '" + Singleton.getInstance().getConfig().getUrlRegistration() + "');");
        }
});

new LoadPageTask().execute();

As you can see, I find the webview by its id, enable the JavaScript support and then declare a new WebViewClient in order to inject the JavaScript after the page has loaded. I chose this approach in order to leave the original scripts intact, because, as you can see below, I need to re-include the original scripts from the assets.

To run some JavaScript you can use the loadUrl method with “javascript:” at the beginning of the string, like this: webView.loadUrl("javascript:some_javascript_operations");

The JavaScript operations are simple jQuery functions that initialize every field with class “form-date” as a datepicker with some custom options. The last line changes the form’s action URL; this is needed because normally that URL contains a path relative to the server, but since the base path will be inside the device it’s necessary to replace it with the fully qualified one retrieved from the APIs (hence the Singleton.getInstance().getConfig().getUrlRegistration() call).

Then the last thing to do is to launch a task that loads the page into the webview. It’s a pretty standard implementation of the AsyncTask class, with a progress bar and an HTTP connection.

private class LoadPageTask extends AsyncTask<Void, Void, Void> {

        private ProgressDialog progressDialog;
        private String htmlcode = "";

        @Override
        protected void onPreExecute() {
                // create a new progress dialog (the activity context is assumed here)
                progressDialog = ProgressDialog.show(MainActivity.this, "", "Loading...", true);
        }

        @Override
        protected Void doInBackground(Void... params) {
                try {
                        Document doc = Jsoup.connect(Singleton.getInstance().getRegistrationURL())
                                .userAgent("Mozilla/5.0 Gecko/20100101 Firefox/21.0").get();

                        // remove all the css and js imports from the document
                        doc.select("link[rel=stylesheet], script[src]").remove();
                        // add the new imports from the assets folder
                        doc.head().appendElement("link").attr("rel", "stylesheet").attr("type", "text/css").attr("href", "css/jquery-ui-1.10.3.custom.min.css");
                        doc.head().appendElement("link").attr("rel", "stylesheet").attr("type", "text/css").attr("href", "main.css");
                        doc.head().appendElement("script").attr("type", "text/javascript").attr("src", "jquery-1.9.1.js");
                        doc.head().appendElement("script").attr("type", "text/javascript").attr("src", "jquery-ui-1.10.3.custom.min.js");
                        doc.head().appendElement("script").attr("type", "text/javascript").attr("src", "common.js");
                        // convert the document back to html
                        htmlcode = doc.outerHtml();

                } catch (IOException e) {
                        e.printStackTrace();
                }
                return null;
        }

        @Override
        protected void onPostExecute(Void result) {
                // load the content into the webview using as base path the folder that contains the scripts
                webView.loadDataWithBaseURL("file:///android_asset/registrazione/", htmlcode, "text/html", "UTF-8", null);
                // then remove the progress dialog
                progressDialog.dismiss();
        }
}

In order to remove the old imports from the document and add the new ones, I used the jsoup library.


Setup a cache proxy with Squid

Today I’m going to explain how to set up a cache proxy within your local network. A cache proxy is a system that stores frequently accessed web objects for fast retrieval; it works well with static content such as HTML pages, CSS stylesheets, JavaScript files, images and even downloaded files, if correctly configured.

This approach has some advantages:

  • on a congested network you can still open webpages faster, because some content doesn’t need to be retrieved from the Internet but from a local cache (within your local network);
  • you can install a parental control and/or an antivirus to check which pages can be opened from the computers within the network (when properly configured to use the proxy).

Obviously there are some disadvantages, such as the fact that you can’t be sure that the cached objects are fresh (unchanged), so you may encounter strange problems with some websites; you may also encounter problems with audio/video content. Most of these problems can be avoided with a proper configuration.

Let’s start with the installation and configuration of Squid on a home-server based on ArchLinux: the procedure is almost the same with other distributions.

First download the package using your package manager (pacman if you’re using ArchLinux):

pacman -S squid

Then you need to configure Squid: open /etc/squid.conf and carefully read the comments. There are a lot of options but you really need to check and change only a few of them:

  • http_port: the port where Squid will listen for requests, usually 3128, but you can change it without problems;
  • http_access: these lines define the access permissions to the proxy; usually you want to allow access for localhost and localnet and deny everything else. To do so (it should already be in the default configuration file):
    # Define what is localnet
    acl localnet src
    acl localnet src
    acl localnet src
    acl localnet src fc00::/7
    acl localnet src fe80::/10
    # Enable localhost and localnet
    http_access allow localnet
    http_access allow localhost
    # And finally deny all other access to this proxy
    http_access deny all
  • cache_mgr: the email address of the cache manager;
  • shutdown_lifetime: defines the time to wait until the service is stopped when required;
  • cache_mem: the memory (RAM) used as a buffer for requests; use at least 256/512MB to have decent performance;
  • visible_hostname: the hostname of your server;
  • fqdncache_size: the size of the resolved-domain cache; use at least 1024;
  • maximum_object_size: the maximum size of objects in the cache; set this to at least 10MB, otherwise you’ll only cache small files (no large images, for example);
  • cache_dir: the location of the cache. This parameter is quite complex. It’s defined as:
    cache_dir ufs /var/cache/squid 20000 16 256: first the file system (ufs), then the location of the cache (/var/cache/squid), then the maximum size (20000MB, or ~20GB), then the number of folders at the first level (16) and finally the number of folders at the second level (256). To be honest you just have to change the maximum size to a serious amount, such as 20-100GB. More cache means more files that don’t need to be retrieved from the Internet.
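Putting the options above together, a minimal configuration sketch could look like this (all the values are illustrative, not taken from a real setup):

```
http_port 3128
cache_mgr admin@example.com
shutdown_lifetime 10 seconds
cache_mem 512 MB
visible_hostname homeserver
fqdncache_size 1024
maximum_object_size 10 MB
cache_dir ufs /var/cache/squid 20000 16 256
```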

After this initial configuration, in which only two options (cache_mem and cache_dir) directly affect efficiency, there are some really important directives that Squid uses to understand what has to be cached and how.

The refresh_pattern directives use a pattern that matches the objects by extension and/or name, followed by a minimum lifetime, a percentage and a maximum lifetime (note that these times are expressed in minutes, not seconds). The percentage is used to decide whether an object whose age falls between the two limits is still fresh or is stale and has to be discarded (strictly speaking, Squid compares it against the object’s age relative to its last modification), for example:

  • 10080 90% 43200: the item is considered fresh if it is newer than 10080 minutes (one week) and stale (discarded) if it is older than 43200 minutes (30 days); if its age falls between 10080 and 43200 minutes, the item is likely (90%) to be considered fresh;
  • 1440 20% 10080: same as above: if the age is less than 1440 minutes (one day) the item is fresh, if it is greater than 10080 minutes (one week) the item is stale, and in between the item is unlikely (20%) to be considered fresh.

A high percentage means that an object is unlikely to change; a low percentage should be used for items that will probably change often. This is not an exact science: if an element changes (such as a new CSS file or a newer version of a JavaScript file) your browser may still load the older version, so never use too long a lifetime for this kind of content (a day, not more). Be sure to read the official documentation for a more in-depth explanation.
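To make the rule concrete, here is a small shell sketch of the simplified decision described above (the function name and the percentage-as-likelihood reading are illustrative; Squid’s real check uses the object’s last-modification time):

```shell
#!/bin/sh
# Simplified freshness decision for a refresh_pattern rule.
# Arguments: age_in_minutes min percent max
is_fresh() {
    age=$1; min=$2; pct=$3; max=$4
    if [ "$age" -le "$min" ]; then
        echo "fresh"     # newer than the minimum: always fresh
    elif [ "$age" -ge "$max" ]; then
        echo "stale"     # older than the maximum: always stale
    else
        echo "fresh with ${pct}% likelihood"   # in between the two limits
    fi
}
```

For the 1440 20% 10080 rule, an object aged 5000 minutes falls in the middle band and is unlikely to be served from the cache.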

My configuration is:

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 40% 40320

Let’s examine it line by line:

  • FTP objects are fresh under 1440 minutes (one day) and stale after 10080 (one week), but they are likely to change (20%);
  • gopher objects are fresh under 1440 minutes and then stale;
  • cgi-bin and query URLs (dynamic scripts such as PHP) are never cached because, you know, they change every time…
  • images are fresh under 10080 minutes and stale after 43200 (30 days), and they are unlikely to change (90%);
  • videos are fresh under 43200 minutes and stale after 432000 (300 days), and they are unlikely to change (90%);
  • archives are fresh under 10080 minutes and stale after 43200, and they are unlikely to change (90%);
  • index pages have no guaranteed freshness and go stale after 10080 minutes;
  • other HTML pages, CSS and JavaScript files are fresh under 1440 minutes and stale after 40320 (four weeks);
  • everything else is never automatically fresh and goes stale after 40320 minutes.

For strange cases, such as Windows Update archives, you can find on the Internet the line(s) that you need to add. Keep in mind that the first line that matches is used, so you need to order the rules from the most specific to the most generic.

Finally enable and start the daemon. On ArchLinux, that uses systemd, this can be accomplished with these two commands:

systemctl enable squid
systemctl start squid

A few final considerations:

  • install on your server a tool like Webmin; this way you can check Squid’s statistics and see the cache-hit %;
  • remember that the browser cache may alter the statistics, since objects are retrieved locally and not from the Squid cache; for testing purposes disable the browser cache, then set it to a lower amount (SSD disks will benefit and you save space);
  • the more computers use the cache, the fresher the cache stays and the higher the cache-hit % you can expect;
  • to avoid overkill, Squid should be used on networks that have at least 2-3 computers; otherwise the only benefit is that you can have a huge cache (gigabytes, not megabytes);
  • the cache-hit % should be at least 15-20%, but don’t expect values such as 80-90%, because HTTPS is never cached (and it’s better this way, since to enable it you have to do things that are better left undone) and because not all objects can be cached (such as PHP pages).

Next time I’ll show you how to configure and install an antivirus layer using clamav. As always if you have any questions feel free to contact me using the comments below 🙂


A simple script to backup all mysql databases into separate files

Hi, today I’m going to present a very simple script that I use to back up all my MySQL databases into separate files compressed with gzip. This way, to restore a database you just need to extract the dump from the file and restore it with this command:

mysql -uUSER -pPASSWORD DB_NAME < db_dump.sql

The script is the following:


#!/bin/bash
#       croma25td simple mysql backup script v1.0
#       croma25td at gmail dot com

# User defined variables
# Mysql user with read privileges on all databases (placeholder values)
MYSQL_USER="backup"
MYSQL_PASS="PASSWORD"

# The current date
date=$(date +%Y-%m-%d_%H:%M:%S)

# The folder that will contain all the backups (one subfolder per run)
BACKUP_DIR="/backup/$date"
# End user defined variables

# If the output folder doesn't exist create it
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, excluding some db names
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -vE '(information_schema|performance_schema|mysql)')
do
  # dump each database in a separate file
  mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $db | gzip > $BACKUP_DIR/$db.sql.gz
done

First there are some user defined variables:

  • MYSQL_USER and MYSQL_PASS: the username and the password of a user with read privileges on all the databases to include in the backup;
  • date: simply the current date/time, used to easily identify different backups;
  • BACKUP_DIR: the folder where the backups will be saved; I used a /backup folder, and every backup goes into a subfolder named after the current date.

Then comes the script:

  • first there is a check on the output folder, to verify that it exists and to create it otherwise;
  • then we obtain all the database names, in one of two ways:
    • if you want to exclude some databases, include their names within the grep -vE command using | as separator: mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -vE '(information_schema|performance_schema|mysql)'
    • otherwise, to get all the names: mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases'
  • lastly, mysqldump creates a backup of each database and gzip compresses it into a single file.

As I mentioned, you need a user with read privileges on all the databases, and forget about using the root user 🙂

So, to create a user just use these commands:

  • open a mysql console using the root account: mysql -uroot -p
  • create the user with name USER and password PASSWORD: GRANT LOCK TABLES, SELECT ON *.* TO 'USER'@'%' IDENTIFIED BY 'PASSWORD';
  • then flush the privileges: flush privileges;
  • close the mysql console with \q, make the script executable with chmod +x and test the script.

Now you also may want to add a crontab entry to execute it automatically.
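For example, a crontab entry like the following (the script path is illustrative) runs the backup every night at 3:00:

```
0 3 * * * /path/to/mysql-backup.sh
```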

As always, if you have any questions or suggestions just use the comments below :)