CyberSecurity Institute

Security News Curated from across the world

Review: Cloud automation tools

Posted on June 7, 2010 by admini

In this groundbreaking test, we looked at products from RightScale, Appistry and Tap In Systems that automate and manage launching the job, scaling cloud resources, making sure you get the results you want, storing those results, then shutting down the pay-as-you-go process. We found that each product gets to the finish line, but each requires some level of custom code and each takes a vastly different, sometimes circuitous route.

We liked RightScale’s ability to both monitor and control application use, as well as its wide base of template controls and its thoughtful approach to overall control. RightScale’s RightGrid methodology manages the full life cycle of apps and instances, and gave us the feeling that hardware could truly be disposable in the cloud. Yet with a bit of work, we found that both Appistry and Tap In Systems offered task automation components that could also be successful for cloud-based jobs.

In our first public cloud management test, we focused on the ability of products from RightScale, Tap In Systems and Cloudkick to simply monitor public clouds like Amazon’s EC2, GoGrid and Rackspace.

This time around, our test bed was narrowed to Amazon’s public cloud, and we used a variety of Amazon cloud services, including Elastic Compute Cloud (EC2) server resources, Simple Queue Service (SQS) queuing system, and Simple Storage Service (S3).

The good news for enterprises is that Amazon’s pay-per-usage model can be a major cost saver. In this real-world test, we were able to complete our tasks using an extraordinarily low amount of our Amazon processing budget. Similar batch job cost savings can be realized using Amazon competitors like GoGrid, Rackspace and others, but only if the tasks are automated using the type of cloud management tools that we tested here.

The basic procedure was similar for all three products: a job that needs to be performed requires application code, data files, a place to process the data (the cloud), and a place to put the results.

There are two options: make the job into a bundle where we could define code, data, outputs and options, or do that plus have a controller get messages from the job in progress. That allowed us either to take pre-defined actions based on the messages or to change what happens in the middle of a job. First, we needed to create an Amazon execution image with the applications we would be using for automation. We chose ffmpeg, an application well suited to video rendering jobs processed by server arrays. Once created, we bundled the image and uploaded it to Amazon so we would have a copy to start with.
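To make the idea concrete, here is a minimal sketch of what such a job bundle might look like when expressed as a message placed on an Amazon SQS queue. The queue and bucket names, message fields and ffmpeg options are invented for illustration, and the snippet uses the current boto3 Python SDK rather than the Amazon tools available at the time of the test.

```python
import json
import boto3  # AWS SDK for Python; stands in for the 2010-era Amazon tools

# Hypothetical names -- not from the original test setup.
QUEUE_NAME = "video-render-jobs"
INPUT_BUCKET = "example-input-videos"
OUTPUT_BUCKET = "example-rendered-output"

def submit_job(input_key: str) -> None:
    """Bundle the job definition (code, data, output, options) into a queue message."""
    queue = boto3.resource("sqs").get_queue_by_name(QueueName=QUEUE_NAME)
    job = {
        "application": "ffmpeg",                             # code to run on the worker image
        "input": f"s3://{INPUT_BUCKET}/{input_key}",         # data file to process
        "output": f"s3://{OUTPUT_BUCKET}/{input_key}.mp4",   # where the result goes
        "options": ["-vcodec", "libx264"],                   # job options passed to ffmpeg
    }
    queue.send_message(MessageBody=json.dumps(job))

if __name__ == "__main__":
    submit_job("clip-001.avi")
```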

Each product then varied in terms of controlling the life cycle of the bundle.

Typically, the life cycle is the sequence of events that coordinates the process of doing jobs, gathering the results, storing them, and reporting success or failure (there will inevitably be both). We gauged success by the degree of built-in controls, the amount of application customization that was necessary, how the management application would programmatically or automatically scale resources to execute the job (by reading CPU or other resource metrics, then adding servers or resources), and how it communicated messages among job executors and coordinating processes.
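A rough sketch of that life cycle, again with hypothetical resource names and using boto3: a coordinator waits for the job queue to drain, tallies the results that landed in S3, and terminates the pay-as-you-go instances.

```python
import time
import boto3

# Hypothetical names used only for illustration.
QUEUE_NAME = "video-render-jobs"
OUTPUT_BUCKET = "example-rendered-output"

def run_lifecycle(instance_ids: list[str]) -> None:
    """Coordinate one batch: wait for the queue to drain, tally results, shut down."""
    queue = boto3.resource("sqs").get_queue_by_name(QueueName=QUEUE_NAME)
    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    # Wait until no queued jobs remain visible (they have all been picked up).
    while int(queue.attributes["ApproximateNumberOfMessages"]) > 0:
        time.sleep(30)
        queue.reload()  # refresh the cached queue attributes

    # Report success/failure by counting the results that actually landed in S3.
    listing = s3.list_objects_v2(Bucket=OUTPUT_BUCKET)
    print(f"{listing.get('KeyCount', 0)} result objects stored in {OUTPUT_BUCKET}")

    # Shut down the pay-as-you-go instances so the meter stops running.
    ec2.terminate_instances(InstanceIds=instance_ids)
```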

RightScale’s flexibility became readily apparent early in our testing. RightScale’s ServerTemplates can be modified, and the orchestration needed to perform jobs from beginning to end doesn’t require bundling all components prior to job execution, as the other products did. By modifying the ServerTemplates, we didn’t need to create our own bundled image on Amazon using its EC2 tools, in effect making the process that much simpler. But, like the other cloud management providers we tested, RightScale requires a bit of scripting work to make it useful.

RightScale offers two types of server arrays. The first is queue-controlled; we found it easiest to use RightScale’s pre-made configuration message encoding system, which is written in Ruby. Workers are process controllers that come in two varieties: one-shot and persistent. The second type, alert-based arrays, can scale up or down based on certain conditions (such as CPU or memory usage).
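RightScale’s actual RightGrid workers are Ruby processes driven by its configuration message encoding; purely as an illustration of the one-shot versus persistent distinction, here is a rough Python analogue that polls a hypothetical SQS queue.

```python
import boto3

QUEUE_NAME = "video-render-jobs"  # hypothetical queue name

def handle(message_body: str) -> None:
    """Placeholder for the real work (e.g. invoking ffmpeg on the referenced file)."""
    print(f"processing job: {message_body}")

def worker(persistent: bool = False) -> None:
    """One-shot workers exit after a single job; persistent workers keep polling."""
    queue = boto3.resource("sqs").get_queue_by_name(QueueName=QUEUE_NAME)
    while True:
        for msg in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=1):
            handle(msg.body)
            msg.delete()          # acknowledge so the job is not re-run
            if not persistent:
                return            # one-shot: stop after the first completed job
```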

Tap In’s Control Plan Editor is an automation tool based on the Petri net model, a mathematical formalism for describing distributed systems — a natural fit for the cloud. At each branch in a Plan, there can be different conditions under which you can run scripts, which can be written in Ruby, Java or Groovy. The idea for our Control Plan was to perform a job that would scale up by launching more instances when there were video files in an Amazon S3 bucket (a rough sketch of this trigger follows).
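Tap In’s real Control Plan scripts are written in Ruby, Java or Groovy; the following is only a hypothetical Python rendering of that scale-up trigger, with an invented bucket name, image ID and worker cap.

```python
import boto3

# Hypothetical names; Tap In's real Control Plans script this in Ruby, Java or Groovy.
WATCH_BUCKET = "example-input-videos"
WORKER_AMI = "ami-00000000"          # placeholder image ID for the ffmpeg worker
MAX_WORKERS = 4

def scale_up_if_work_pending() -> None:
    """Launch one worker per pending video file, up to a fixed cap."""
    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    listing = s3.list_objects_v2(Bucket=WATCH_BUCKET)
    videos = [obj["Key"] for obj in listing.get("Contents", [])
              if obj["Key"].endswith((".avi", ".mov", ".mp4"))]
    if not videos:
        return  # nothing to render, so launch nothing

    count = min(len(videos), MAX_WORKERS)
    ec2.run_instances(ImageId=WORKER_AMI, InstanceType="m1.small",
                      MinCount=count, MaxCount=count)
```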

Appistry’s console can be accessed on any of the instances within the fabric, and fabrics can be woven together through instances of the Appistry Network Bridge. Console access requires a browser, an instance of Java, and Adobe AIR.

The CloudIQ engine can launch tasks, which are then taken care of by the fabric workers. The CloudIQ Platform user interface divides a fabric into applications, services, packages and workers. Applications are the monitored fabric processes; they use services, which exist in packages, which are in turn attended to by workers. The fabric’s work output is homogeneous, as the workers all run identical processes. Fabrics can be linked together to create dependencies among the workers’ discrete fabric processes. CloudIQ Storage is similar in concept to Amazon S3, and in a way competes with it. Each instance of CloudIQ Storage can be in a different location, but they all work together as one group and appear as one virtual drive. Generally, CloudIQ files are synced with each other (for example, the same files are located on each storage location). In the Amazon Appistry images, CloudIQ Storage is built into the image, which means that by default the storage disappears along with the instances, unless you change the default directories to Amazon Elastic Block Store (EBS) volumes. It also means that, by default, storage is pre-allocated and finite within the instance.
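The hierarchy is easier to picture with a small, purely illustrative data model; this is not Appistry’s API, just a restatement of the relationships described above.

```python
from dataclasses import dataclass, field

@dataclass
class Package:
    name: str            # e.g. a JDK, Ruby, or an RPM installed on the fabric

@dataclass
class Service:
    name: str            # e.g. Tomcat, WebLogic, Apache
    packages: list[Package] = field(default_factory=list)

@dataclass
class Application:
    name: str            # a batch or computing process running on the fabric
    services: list[Service] = field(default_factory=list)

@dataclass
class Worker:
    host: str
    # every worker runs the same processes, so the fabric's output is homogeneous
    applications: list[Application] = field(default_factory=list)
```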

In our testing, we created a wrapper program to launch the ffmpeg video rendering application. We coded against the CloudIQ engine in such a way that if we launched the client multiple times, it would distribute each task to another fabric worker. When the work was done, we copied the results over to a single EBS volume attached to the first instance. To access files in the storage and control the storage process, we could use the ‘curl’ command to send HTTP requests for operations such as delete, deploy, get, put and stop. There are three different types of programs installed onto a fabric: a fabric application, which is a batch processing or computing application; a service, such as Tomcat, WebLogic or Apache; or a package, such as the Java Development Kit (JDK), Ruby, an RPM, or a command-line installation like “yum install”.
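The article doesn’t reproduce our wrapper, but a minimal sketch of a wrapper around ffmpeg might look like the following. The paths and codec options are hypothetical, and the real wrapper also reported status back to the CloudIQ engine.

```python
import subprocess
from pathlib import Path

def render(input_path: str, output_dir: str = "/mnt/results") -> Path:
    """Transcode one video with ffmpeg and return the path of the result.

    Paths and codec options are invented for illustration only.
    """
    out = Path(output_dir) / (Path(input_path).stem + ".mp4")
    cmd = ["ffmpeg", "-y", "-i", input_path, "-vcodec", "libx264", str(out)]
    subprocess.run(cmd, check=True)   # raise if ffmpeg exits non-zero
    return out
```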

Appistry is a sophisticated construction set for distributed cloud computing, but it is generally aimed at more persistent applications. Its monitoring and reporting infrastructure relies mostly on external tools, compared to the instance monitoring capabilities of RightScale and Tap In Systems’ Control Plan. Appistry can take a variety of code and link it in with the Appistry APIs to produce a distributed system (or set of systems) if you’re adept at coding the project, and Appistry’s success is fully dependent on lots of custom coding. The results, however, could be very useful. But first, you need to get through the 1,400 pages of documentation. Fortunately, paid customers get dedicated systems engineering help, and architectural support is also available.

http://news.idg.no/cw/art.cfm?id=1104EE7C-1A64-6A71-CE78160045E84F52

Making Sense of Your iPad Options with New AT&T Data Plans

Posted on June 4, 2010 by admini

You can still turn the 3G data connectivity on or off with a click or two from the iPad, making the 3G version of the iPad a more versatile option for business professionals who might need access to critical resources in a pinch when no Wi-Fi network is available.

The good news is that customers already subscribed to the unlimited data plan are grandfathered in and can continue using the unlimited plan as long as they choose. You can use the AT&T Data Calculator to estimate the amount of data you expect to consume on a monthly basis and choose the plan that makes the most sense. Existing customers who have already been using the 3G connectivity of the iPad should be able to view their data usage history online, or at least get that information by contacting AT&T.

If you are a Sprint wireless customer and you have a Palm Pre or the new HTC EVO, then you already have in your hand a device capable of creating its own personal hotspot that can connect a handful of devices and share out the wireless connection. The device is $270 at full price, but like a smartphone you can get it at a significantly lower, subsidized price if you are willing to accept a two-year contract. Whether you enter into a contract or not, the service is $60 per month for 5GB of monthly data capacity from both carriers (although Sprint also includes unlimited data over 4G where that network is available).

There is an undocumented feature of Windows 7 that allows you to turn a laptop into a portable hotspot as well. However, when you get to the point where you are carrying your Windows 7 laptop so you can access the Internet from your iPad, I think you have crossed some sort of line in terms of practicality.

There have been leaks and rumors suggesting that the new iPhone OS will be capable of tethering. The fact that AT&T has dropped unlimited data, and added a new tethering option where they let you give them $20 a month for the privilege of having the option to tether–but without any additional data allocation–also implies that tethering will be coming soon.

If AT&T’s data is accurate, 65 percent of those users are consuming less than 200MB per month, and 98 percent are consuming less than 2GB.

The megabytes can add up quickly, so IT administrators need to be diligent when examining the data needs of mobile users with iPads and consider carefully the available options for getting to the data.

http://www.pcworld.com/businesscenter/article/197999/making_sense_of_your_ipad_options_with_new_atandt_data_plans.html

UK’s Times sold 5,000, FT shifted 130,000, WSJ 10,000 subs

Posted on June 4, 2010 by admini

Not bad for a few days’ work, and could be a relatively nice earner…

But whether significant numbers of iPad users will renew the £9.99 subscription each month, after that first-week flurry of app excitement, remains to be seen.

Also at D8, Murdoch said his Wall Street Journal app now has 10,000 customers, paying $17.29 a month or free to those already subscribed to the website/newspaper.

Yesterday, Financial Times product development manager Steven Pinches told a separate conference that the FT has seen 130,000 downloads of its free-to-download iPad app since it was made available in the U.S. two weeks earlier (via Mobile Entertainment).

What we don’t know is whether the app is actually enticing iPad users to subscribe to the FT for the first time. Unlike the Times, the FT’s app is free for two months thanks to a sponsorship deal, but it will then offer access only to readers who pay the title’s platform-agnostic annual subscription.

The Guardian Eyewitness photography app, from our parent company Guardian News & Media, has seen 90,000 downloads since iPad’s US launch, free under a Canon sponsorship.

http://www.guardian.co.uk/media/pda/2010/jun/03/ipad-newspapers

Mayor Bloomberg replaces index cards with an iPad

Posted on June 4, 2010 by admini

No matter what you think of Apple, you have to agree that the iPad, just like the iPhone, is changing the world. Yesterday, Mayor Bloomberg used his iPad for a speech instead of index cards.

http://www.slipperybrick.com/2010/06/mayor-bloomberg-replaces-index-cards-with-an-ipad/

Cloud Market Share: 2 Percent, But Growing

Posted on June 4, 2010 by admini

Clouds Are Customers, Not Competitors
Tier 1 Research tracks the market for third-party data center providers, a universe that includes hosting companies as well as data center developers who lease turn-key “wholesale” space. Although cloud computing is seen as a potential replacement for in-house server rooms and company-owned data centers, many cloud services require their own data center space, and lease it from colocation providers and data center operators. Antonio Piraino said cloud services are creating customers for data centers, not competition. “I believe the cloud computing growth helps the entire data center and Internet infrastructure market,” said Piraino.

Sean Hackett, research director of CloudScape for The 451 Group, agreed that for most data center providers, cloud computing represents an opportunity, not a threat. “This will translate into increased demand for the stuff you sell,” Hackett told the audience of more than 200 data center professionals.

Small businesses’ enthusiasm for cloud services may translate into lost customers for some providers offering shared hosting and dedicated servers.

Enterprise Adoption Growing Slowly
Hackett said enterprise use of cloud services shows a similar pattern to corporate adoption of other outsourced services.

The report predicts that enterprise adoption of “private cloud” services will accelerate over the next five years and will boost business for third-party data center outsourcing. While many cloud computing providers will occupy leased colocation or third-party data center space, some will shift their business to huge public clouds served out of massive data centers.

http://www.datacenterknowledge.com/archives/2010/06/04/cloud-market-share-2-percent-but-growing/

Keeping Cloud Costs Grounded

Posted on June 4, 2010 by admini

As Andy Mulholland, global CTO at Capgemini has said, “Relatively speaking, [cloud computing] is unstoppable… The question is whether you’ll crash into it or migrate into it.”

As with implementing any new technology, understanding the key business needs and the technology’s role in supporting them is critical before leveraging the cloud. Unexpectedly high costs due to poor planning can negatively impact take-up of further cloud initiatives, so companies need to put in the appropriate upfront time conducting research. Without across-the-board involvement, the cloud could end up costing more than you think.

Identify The Right Type Of Cloud
Every cloud service and cloud architecture has different capabilities, so it’s important to determine which ones best meet your business objectives. An organization with a large internal IT estate may wish to repurpose some of this to create a private infrastructure cloud–a sound way to increase utilization of existing assets and consider the internal economics of providing IT as a service. At the other end of the spectrum, an organization with a mobile workforce that makes heavy use of business applications may find that selecting public SaaS over in-house services offers improved productivity, as well as cost savings. Each business, no matter its size, will need to determine which cloud technologies will serve them best.

Pricing Models And Vendor Lock-In
The current lack of maturity in cloud standards, and the rush to innovate and differentiate, means that businesses will see a degree of lock-in with cloud platforms comparable to that of competing hardware vendors 20 years ago. Not that this is entirely a bad thing: with fewer investments in physical IT assets, the cost of switching from one platform to another is much lower, though still significant. These considerations should be factored first into decisions about the type of cloud, then into decisions about the vendors (including internal resources) that will provide the cloud assets.

Cloud purists proclaim a gospel of “pay-by-use” or “utility pricing,” but the reality is that only the largest cloud providers can operate a service with fine-grained hourly billing at a realistic rate. Cloud providers must invest in IT resources, so they prefer the financial certainty of monthly or longer subscription terms. Even if a cloud project has a known duration, the resources needed by the project may be uncertain over that time and may vary on a daily or hourly basis. Where this degree of flexibility to scale resources is required, the price plan should support it, or the promise of cloud scalability cannot be realized.
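As a purely hypothetical illustration (the rates below are invented and are not any provider’s published pricing), the break-even between fine-grained and subscription pricing comes down to how many hours the resources actually run:

```python
# Hypothetical rates for illustration only -- not any provider's actual pricing.
HOURLY_RATE = 0.10       # $ per instance-hour, pay-by-use
MONTHLY_RATE = 50.00     # $ per instance-month, subscription

def cheaper_plan(hours_used_per_month: float) -> str:
    """Pick the cheaper plan for a single instance given expected usage."""
    hourly_cost = HOURLY_RATE * hours_used_per_month
    return "pay-by-use" if hourly_cost < MONTHLY_RATE else "subscription"

# A batch job that runs 8 hours a day for 10 days (80 hours) favours pay-by-use;
# an always-on instance (~720 hours a month) favours the subscription.
print(cheaper_plan(80))    # -> pay-by-use   ($8 vs $50)
print(cheaper_plan(720))   # -> subscription ($72 vs $50)
```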

Varying charges for compute, storage, bandwidth and related services such as load balancing make comparing competing offerings almost impossible. Idle hosting (where your application is deployed in the cloud but not running) and inaccurate bandwidth estimates are two of the most prevalent sources of unexpected costs.

http://www.forbes.com/2010/06/02/internet-software-zeus-technology-cloud-computing-10-garrett.html?boxes=Homepagechannels
