Archive for the ‘Uncategorized’ Category

Growing across borders

by Ronnie Teo, bizhive@theborneopost.com. Posted on March 18, 2012, Sunday

With competition in the Malaysian banking scene intensifying, local banking groups will continue to seek opportunities offshore for further growth.


With Asia's economic development set to accelerate over the coming decade on the back of increasing income levels and technological advancements, Malaysian banks now see a push for regional integration as the next stepping stone for growth.

As a matter of fact, the need for the financial sector to cross international borders becomes more urgent as the regional and international complexity of Malaysia's financial system continues to transform in light of global financial developments.

The forces of change are becoming more apparent, with factors such as demographic shifts, greater requirements for infrastructure development and the emergence of innovation-driven industries all hastening regional integration.

Thus, the Financial Sector Blueprint 2011-2020 (FSB) came into being.

Launched by Prime Minister Datuk Seri Najib Tun Razak in December last year, this new blueprint takes over from the 10-year Financial Services Masterplan, which ended after laying a strong foundation for the country's banking and financial sector.

The move endeavours to mobilise regional funds and promote the efficient allocation of resources to investments.

"I want to see our financial sector playing a key role in the cross-border intermediation of Asia's financial funds, with Malaysia's financial institutions continuing to venture abroad and to replicate their domestic successes in these new markets," stressed Najib during the launch.


Dot Com Pho – OMG! It’s Raining In The OC Edition

Believe it or not, it does rain in the OC. It's a rare event, but when it happens, Orange County does a fairly good imitation of what life is like for the Vancouver Dot Com crew. Not only did the rain drive us off the Pho Ba Co patio, it drove us out of Pho Ba Co completely. The inside of Pho Ba Co didn't have enough seats to accommodate our group, so we had to go next door to Las Fajitas Mexican Grill.

We had a full house of people making it out to the Dot Com fiesta. We created one long table near the wall to seat everyone. If you would like to join us for a future Dot Com Pho meetup, follow me on Twitter to find the time and place. We do this every Saturday (unless I'm out of town).

For this edition of Dot Com Pho, we check out the new Apple iPad, test the power of Verizon 4G LTE, meet the lady who's wanted in six states, try to take a picture of the bow tie man without his bow tie, enjoy hot Mexican food, and a whole lot more. Enjoy, and we'll see you next week!


7 Welcome To London | The Story So Far #4 – Video

17-03-2012 06:13. IndyBrown.TV brings you the story behind the British Hindi thriller '7 Welcome To London'. In this episode, see exclusive coverage of the world premiere of '7 Welcome To London', which took place at Cineworld Ilford. Find out more about '7 Welcome To London' on http://www.facebook.com, http://www.twitter.com and http://www.7wtlfilm.com. Stay in the loop with our shows. Subscribe for more on http://www.youtube.com. Like us on http://www.facebook.com. Follow us on http://www.twitter.com. http://WWW.INDYBROWN.TV


Internet providers map plan to 'sink' pirates

Comcast, Cablevision, Verizon, Time Warner Cable and other Internet service providers (ISPs) in the United States will soon launch new programs to police their networks in an effort to catch digital pirates and stop illegal file-sharing.

Major ISPs announced last summer that they had agreed to take new measures in an effort to prevent subscribers from illegally downloading copyrighted material, but the specifics surrounding the imminent anti-piracy measures were not made available. Now, RIAA chief executive Cary Sherman has said that ISPs are ready to begin their efforts to curtail illegal movie, music and software downloads on July 12.

"Each ISP has to develop their infrastructure for automating the system," Sherman said during a talk at the annual Association of American Publishers meeting, according to CNET. "Measures will also be taken to establish databases so they can keep track of repeat infringers, so they know that this is the first notice or the third notice. Every ISP has to do it differently depending on the architecture of its particular network. Some are nearing completion and others are a little further from completion."

Customers found to be illegally downloading copyrighted material will first receive one or two notifications from their ISPs, essentially stating that they have been caught. If the illegal downloads continue, subscribers will receive a new notice requesting acknowledgement that the notice has been received. Subsequent offenses can then result in bandwidth throttling and even service suspension.

The news comes shortly after the closure of file-sharing giant Megaupload and increased pressure on other networks thought to be major hubs for the illegal distribution of copyrighted materials. Some studies show that these measures have had no impact on piracy, however, so organizations like the RIAA have been lobbying for ISPs to intervene and develop systems that will allow them to police their networks and directly address subscribers who illegally download copyrighted content.


Google Webmaster Tools Crawl Errors: How To Get Detailed Data From the API

Earlier this week, I wrote about my disappointment that granular data (the number of URLs reported, the specifics of the errors) was removed from Google Webmaster Tools. However, as I've been talking with Google, I've discovered that much of this detail is still available via the GData API. That this detail was available through the API wasn't at all obvious to me from reading their blog post about the changes; the post's wording led me to believe that the current API would only provide access to the same data available from the downloads in the UI. But in any case, up to 100,000 URLs for each error, plus the details of most of what has gone missing, are in fact available through the API now, so rejoice!

The data is a little tricky to get to, and the specifics of what's available vary based on how you retrieve it. Two different types of files are available that provide detail about crawl errors: the crawl errors CSV and the crawl errors feed (both described below).

(Thanks to Ryan Jones and Ryan Smith for help in tracking these details down.)

What this means is that different slices of data are available in four ways:

What you're able to see about each error is different based on how you access it.

Eight CSV files are available through the API (you can download them all for a single site or for all sites in your account at once, as well as just a specific CSV for a specific date range), but this support is not built into most of the available client libraries. You'll need to build it in yourself (a rough sketch follows below) or use the PHP client library (which seems to be the only one that has support built in). The CSV files are:

For the topic at hand, let's dive into the crawl errors CSV. It contains the following data:

This file does not include details on crawl error sources (but that is available through the crawl errors feed, described below).
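Since that support is missing from most client libraries, here's a rough idea of what rolling your own CSV download might look like. This is a minimal Python sketch, not code from the API documentation: the ClientLogin flow and the "sitemaps" service code follow the old GData conventions for Webmaster Tools, but the crawlissues path and the alt=csv parameter are assumptions, so treat it as a starting point rather than a working recipe.

    # Minimal sketch. The ClientLogin flow and "sitemaps" service code
    # follow GData conventions; the crawlissues path and alt=csv
    # parameter are assumptions, not confirmed endpoint details.
    import requests

    FEEDS = "https://www.google.com/webmasters/tools/feeds"

    def client_login(email, password):
        # Exchange account credentials for a GData ClientLogin token.
        resp = requests.post(
            "https://www.google.com/accounts/ClientLogin",
            data={"Email": email, "Passwd": password,
                  "service": "sitemaps", "source": "crawl-errors-sketch"})
        resp.raise_for_status()
        # The response body is key=value lines; the Auth line holds the token.
        return dict(line.split("=", 1)
                    for line in resp.text.splitlines())["Auth"]

    def download_crawl_errors_csv(token, site, out_path):
        # Hypothetical CSV request for one site's crawl errors.
        headers = {"Authorization": "GoogleLogin auth=" + token,
                   "GData-Version": "2"}
        url = "%s/%s/crawlissues/?alt=csv" % (
            FEEDS, requests.utils.quote(site, safe=""))
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(resp.content)

    token = client_login("you@example.com", "your-password")
    download_crawl_errors_csv(token, "http://www.example.com/",
                              "crawl_errors.csv")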

It appears that the crawl errors feed request code is built into the Java and Objective-C client libraries, but you'll have to write your own code to request this if you're using a different library (see the sketch below). You can fetch 25 errors at a time and programmatically loop through them all. The information returned is in the following format:
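As for making the feed requests yourself, the paging loop might look something like the following Python sketch. start-index and max-results are the standard GData paging parameters, but the crawlissues path (and the ClientLogin token reused from the sketch above) remain assumptions on my part.

    # Sketch: page through the crawl errors feed 25 entries at a time,
    # reusing the token and feed base from the earlier sketch.
    import xml.etree.ElementTree as ET
    import requests

    ATOM = "{http://www.w3.org/2005/Atom}"

    def iter_crawl_errors(token, site):
        headers = {"Authorization": "GoogleLogin auth=" + token,
                   "GData-Version": "2"}
        start = 1
        while True:
            url = ("https://www.google.com/webmasters/tools/feeds/%s/"
                   "crawlissues/?start-index=%d&max-results=25"
                   % (requests.utils.quote(site, safe=""), start))
            resp = requests.get(url, headers=headers)
            resp.raise_for_status()
            entries = ET.fromstring(resp.content).findall(ATOM + "entry")
            if not entries:
                break  # past the last page of results
            for entry in entries:
                yield entry
            start += len(entries)  # advance by one page of (up to) 25

    # Usage:
    # for entry in iter_crawl_errors(token, "http://www.example.com/"):
    #     print(ET.tostring(entry, encoding="unicode")[:200])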
