Apple moments for App.net

As expected, yesterday’s tech news was largely dominated by the Apple iPhone announcement. What was interesting, though, was the effect the announcement had on the App.net social network.

For those of you who are not aware, App.net is a user-funded Twitter clone that recently closed a Kickstarter-style campaign, raising $500K to get the service bootstrapped.

The method we used was simply to monitor new posts (similar to tweets, but longer) and send each resulting post to Rush Hour for analysis in real time. Development time was circa 40 minutes last Sunday to integrate with Rush Hour. We do fail on some posts, but these are just a handful and well within the +/- 3% deviation we permit ourselves at this stage.
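For anyone curious about the plumbing, here is a minimal sketch of the polling approach. The stream endpoint and the since_id paging parameter are assumptions drawn from the public app.net API docs of the time, and the Rush Hour ingest URL is purely hypothetical.

<?php
// Minimal sketch: poll the public app.net global stream and forward each new
// post to Rush Hour. The stream endpoint and since_id parameter are assumed
// from the public app.net API; the Rush Hour ingest URL is hypothetical.
$streamUrl = 'https://alpha-api.app.net/stream/0/posts/stream/global';
$ingestUrl = 'https://rushhour.example.com/ingest';   // hypothetical endpoint
$sinceId   = 0;

while (true) {
    $body = @file_get_contents($streamUrl . ($sinceId ? '?since_id=' . $sinceId : ''));
    $response = ($body === false) ? null : json_decode($body, true);
    if (!isset($response['data'])) {
        sleep(30);                                    // back off on failure
        continue;
    }
    foreach (array_reverse($response['data']) as $post) {    // oldest first
        $sinceId = max($sinceId, (int) $post['id']);          // newest id seen so far
        $context = stream_context_create(array('http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => json_encode($post),
        )));
        @file_get_contents($ingestUrl, false, $context);      // hand the post to Rush Hour
    }
    sleep(15);                                        // polling interval
}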

Disclaimer

We’re still building Rush Hour, so consider all the numbers below provisional. While we’re pretty confident we are seeing everything, beta means bugs: things get missed, twisted, etc. So please do take these numbers with a pinch of salt.

Service Usage

The hours up to the event saw pretty normal usage for App.net. Keep in mind, its users are early adopters and limited in number; the last reported figure was north of 17,500 (https://github.com/appdotnet/api-spec/wiki/Frequently-Asked-Questions#wiki-howmanyalphausersarethere).

During the day, excluding 17:00 and 18:00 UTC (the hours of the Apple event), the average posts per hour (PPH) was approx. 492. Including the posts made during the event, the average PPH increased to 593, a rise of circa 20.5% in activity.

Taking the same hours from the previous day, the combined total was approximately 1,266 posts. During the Apple event, the combined total was approximately 3,418 posts. That’s an incredible increase of 169.98%.
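For anyone checking the maths, these are plain percentage changes against the baseline figures; a quick sketch using the numbers above (variable names are just for illustration):

<?php
// Percentage change = (new / old - 1) * 100, using the figures quoted above
$pphNormal    = 492;   // average PPH with the two event hours excluded
$pphWithEvent = 593;   // average PPH with the two event hours included
echo round(($pphWithEvent / $pphNormal - 1) * 100, 1);            // 20.5 (% rise in hourly activity)

$postsPreviousDay = 1266;   // same two hours, the previous day
$postsEventHours  = 3418;   // the two Apple event hours
echo round(($postsEventHours / $postsPreviousDay - 1) * 100, 2);  // 169.98 (% increase)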

Engagements

What we also measure is user engagements. The previous day, the average over the same hours was approx. 944. During the event, engagements increased to 2,582.5. This is an increase of 173%.

We also see that user-to-user engagements (better known as mentions, where a person @’s someone to engage in a conversation) increased by 64.7% from the same period the previous day. During those hours on Tuesday, there was an average of 702.5 user-to-user interactions, while during the event the average increased to 1,156.5.

The top users that others were engaging with during the event were:

@parker
@po
@cats
@trine
@sham

Clients

App.net is still quite new, and therefore the client base is quite small: approx. 75 different clients/scripts are using it. In keeping with the general daily trend, IFTTT (If This Then That) seems to be the client of choice for most appnetters, indicating that users are cross-posting to other social media and not considering App.net their primary social conversation point.

IFTTT
Alpha (the main App.net website client)
Apptizer
quickApp
Mention
#moApp
Buffer
Dabr.EU
Adian
#PAN

Other Points of Interest

Hashtag usage perhaps saw the biggest increase: tag usage was up 366.66% on the same period the previous day. The top 5 hashtags during the two hours were:

ParkeriPhoneLiveBlog
iphone5
Apple
tgiphone5
keynote

Link sharing, however, increased by only 7.5% during the event compared with the previous day. The top links shared during the two-hour event were largely unrelated to the Apple event:

http://App.net
http://TWiT.TV
http://blog.app.net/blog/2012/09/12/quick-update-stars-and-settings/
http://t.co/2cnnJYdN
http://appnetstats.com/

Conclusion

What we can see here is clearly the effect an Apple event has on a social network. It would be interesting to compare these figures against those from Twitter and Facebook. Alas, having been requested by Twitter to cease and desist from performing analytics on their service, we are unable to give you a comparison to a more popular service.

What we can tell for App.net is that the fledgling service is doing okay with its community. Against the figure of 17,500 registered users disclosed in August, our analysis indicates that approx. 11.4% were active yesterday and 8.9% the previous day. While this may seem small, from our previous research on Twitter it holds up as a reasonable engagement rate. What is really positive is that, only a few weeks into the service’s life, there appears to be a loyal community building up around it, with an engagement rate not dissimilar to larger social networks at a much later stage in their development.

Irish Web Awards & IT@Cork Nominations

Well, I’ve been very busy of late converting Druid DNS over to Oggim DNS. That keeps me occupied during the day, and I’m glad to say it’s coming along very nicely at present. There’s still lots of work to do on the UI, but the port to Zend Framework is going very well.

At the same time I am busy writing Rush Hour, our brand-spanking-new analytics package (which is coming along in leaps and bounds now). Out of the blue, I got two rather surprising and humbling emails.

Tweetrush detects its first million tweets in one day

It’s been a busy day for both Twitter and the Rush Hour-powered Tweetrush. That’s right, the first million barrier has been busted. Check out the Tweetrush stats for 27th August 2008.

While congrats go to Twitter for coping with the extra traffic while the DNC was on, congrats also go to Gnip, our feed provider, and to the Rush Hour team for designing such a stable system considering the traffic and data we get and analyse. It’s no mean thing to do, and I see the DBs that are involved, so it’s certainly a lot of fun.

I’m offline for a few days while I attend a family event, but I promise to respond to all RH mails when I come back. In the meantime, here is another great article on a use for Tweetrush.

Tweetrush.com is live …Thank You to…

Well, what an exciting few days it has been here. TweetRush.com, the first Rush Hour proof-of-concept site, is up and running and so far seems to be holding fast. I’m taking a break tonight and actually sleeping for a change, but rest assured all mails and tweets will be answered.

What is TweetRush about? Well, it’s a demo of a new product we’re building that aims to take the pain out of stats for web app builders and is based on actions and events. But more on that will be released as we get closer to getting things out the door.

We put TweetRush up to get an idea of how well the engine would perform against a high-volume site. Sure, there are a few issues, mostly around the fact that we don’t get a direct Twitter feed, but overall it’s not bad. We hope, once all is calm, to establish good contact with Twitter, maybe an XMPP feed, and see what we can both come up with. Do remember, this is still a work in progress :)

Now Rush Hour and the TweetRush spin-off are not just my babies, but also James’s, Walter’s, Grzegorz’s and Slawomir’s babies too. We have all put a massive effort in to get this far, and it’s been great to be part of a very talented team. There is also Adam, but he was busy getting married or something crazy, so we excused him for a while. No escaping us now, Adam, now that you know we mean business :) Without all of these great people involved we would never have gotten this far at all.

I am also delighted to say we were featured on TechCrunch. Thank you for the exposure, Michael Arrington! That’s another first, I think, for all of us on the team. Since then, we’ve been inundated with greetings and well wishes, as well as many questions. Site accesses have gone through the roof (and a big thank you, Donncha, for the advice on what to expect DB-wise). All the feedback has been great to get, allowing us to enjoy the moment, but also to look deeper into our application and the TweetRush implementation, to see what else we can do to improve its accuracy and what people’s expectations are.

Finally, there are four other people I must say thank you to, as without them nothing would have happened either. Patrick Buttimer of Eirteic Consulting, for being just absolutely great. Justyna, for being there and not walking out after I deserted her for weeks to work on systems and code. She’s a babe that rocks! Damien of Mulley Communications: the PR was excellent, and the support through the highs and lows, nerves and all, has been second to none. And finally the growing legend that is Pat Phelan of Twitterfone/Max Roam/ … (endless list of great businesses). His advice via tweet tennis (is that a new term, Damien?) and help in reaching people has been great. A real genuine gentleman.

Okay, sleep beckons now. I’m really hoping nobody Diggs us tonight; being TC’d is enough for one day :) More to come on Rush Hour later, after I sleep.

P.S. For the PHP peeps, of course Tweetrush was written in PHP using the Zend Framework :) Seriously it rocks.

Interesting Graph

[Graph: stats from the Rush Hour engine]

These stats were captured via the upcoming Rush Hour engine that a few others and I have been developing. More info on where they come from later in the week, or early next week. For the moment, let’s just say big DBs rock, even bigger machines rock harder, and see if we can guess where they come from.

For every action there can be an equal reaction!

Impressive MySQL stats

So I’m doing something that relies on MySQL a lot. Last night I had to test a rebuild process that we would use if there were a failure in our systems. The worst-case scenario would be that we had lost about a week’s data and needed to reinsert the whole lot.

So I wrote a quick and nasty PHP script with multiple nested foreach() loops. Yeah, I know what you’re thinking, but as I said, it was a quick and nasty script. Basically, all this script did was parse a directory full of XML files and, based upon the content, perform multiple SELECT, INSERT or UPDATE queries per element in the given XML file.
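To give a flavour of it, here is a rough sketch of that approach rather than the actual script; the table, column and element names below are made up purely for illustration.

<?php
// Rough sketch of the rebuild idea: walk a directory of XML files and, per
// element, SELECT to see if the row already exists, then INSERT or UPDATE
// accordingly. Table, column and element names are illustrative only.
$db = new mysqli('localhost', 'user', 'pass', 'rushhour');

foreach (glob('/data/backlog/*.xml') as $file) {
    $xml = simplexml_load_file($file);
    if ($xml === false) {
        continue;                                        // skip unparsable files
    }
    foreach ($xml->item as $item) {                      // <item> is a made-up element
        $key = $db->real_escape_string((string) $item->id);
        $val = $db->real_escape_string((string) $item->value);

        $found = $db->query("SELECT id FROM events WHERE id = '$key'");
        if ($found && $found->num_rows > 0) {
            $db->query("UPDATE events SET value = '$val' WHERE id = '$key'");
        } else {
            $db->query("INSERT INTO events (id, value) VALUES ('$key', '$val')");
        }
    }
}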

So, on my little iMac (24″, 4GB RAM, 1TB drive), the following are the stats from the completion of the script.

Total Processing Time: 4975.59566307 seconds.
Total DB Inserts: 1,961,000
Total Selects: 12,035,743
Total Updates: 10,465,071

This all translates to:

Inserts Per Second: 394.12
Selects Per Second: 2,418.95
Updates Per Second: 2,103.28
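For clarity, the per-second figures are simply each total divided by the processing time; nothing fancier than this:

<?php
$seconds = 4975.59566307;     // total processing time from above
echo  1961000 / $seconds;     // ~394.12 inserts per second
echo 12035743 / $seconds;     // ~2,418.95 selects per second
echo 10465071 / $seconds;     // ~2,103.28 updates per second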

Keep in mind, I was watching a movie at the same time, and no optimisation had been carried out on the server at all (although the DB is fairly well normalised). The table format was MyISAM.

Now imagine what MySQL could do on a Linux server with custom optimisations to make it faster. I am quite impressed, given the workstation I was working on and the complexity of the queries we are doing. Pat on the head for MySQL. Well done.

If you use MySQL….

If, like me, you use MySQL yet don’t pay for commercial support (there is nothing wrong with that!), there is another way to contribute. A MySQL developer finds himself in a bad situation and needs a little bit of help from us all. Normally I would not bother with such things, but then again, I do use MySQL a lot, I am dependent on it, and this guy helped develop it! There is also a kid involved; if it were your kid, you’d do anything to save them. Anyhow, if you can give a donation, as little or as large as you like, every little bit helps. Think about it: if every WordPress blogger gave just $1, the medical fund would be raised very quickly. The decision is yours, just my little nudge :). Ta

NetworkManager & OpenVPN Servers on Port 443 Fix

Now, after yesterday’s post, I decided to play a bit more and set up an OpenVPN connection to a remote site. Luckily the site was running the excellent IPCop with an OpenVPN service; however, it was configured to run on port 443.

Searching high and low in the NetworkManager applet on Ubuntu Hardy Heron, I could not find an option to connect on a specific port number. Bruised, I dumped my OpenVPN config into /etc/openvpn and brought up the link manually.

Installing the GroupWise Client on Ubuntu Hardy Heron

So we use GroupWise, obviously. However, for the most part I spend my time on Ubuntu these days, as I like XFCE4 and Ubuntu as a whole. This is not to say I don’t use my Novell SUSE Linux Enterprise Desktop (I do, quite a lot), but it’s also nice to have a bit of a play with other distros from time to time.


Running Confluence on port 80 or 443 using mod_proxy_ajp

Okay, I use Confluence a lot, and I think that despite some of its failings in the UI department (although they are being addressed, as version 2.8 shows), it’s an excellent enterprise wiki.

Now, not wanting to arse around with Tomcat all the time, I generally use the standalone build, as it’s quite sufficient for my purposes. The problems arise when we have remote workers who are behind restrictive firewall policies, which means I have to provide the service on port 443. Now, you could spend a bit of time configuring Tomcat to run on said port, but that’s not recommended, plus you may want to use other technologies such as PHP, etc. So here is a quick how-to on getting Confluence up and running on port 443 on SUSE Linux Enterprise Server 10 (although the same applies to most Linux distros, with the exception of the convoluted config Novell apply to Apache). This post presumes you have already installed Confluence standalone and it’s running fine.