Opinions and views from expert CIOZone members.
Posted by Bill Gerneglia in Untagged
A recent data breach investigations report from Verizon shows that small businesses continue to be the most victimized of all companies. Is this because there are many more smaller businesses than larger ones - and the larger ones have more resources and layered security mechanisms to combat cyber attacks? The answer depends on a couple of factors.
Of the 621 confirmed data breach incidents Verizon recorded in 2012, close to half occurred at companies with fewer than 1,000 employees, including 193 incidents at organizations with fewer than 100 workers.
A separate report from Symantec confirms the trend: cyber attacks on small businesses with fewer than 250 employees increased 31% in 2012, after growing 18% the year before. It's a pattern many security analysts have noticed for several years now. Larger corporations have more resources and CISOs capable of investing heavily in sophisticated security strategies, which has forced cyber criminals to look for other ways to direct their initial attacks.
Cyber criminals have become adept at using smaller businesses as the initial point of attack, working upstream toward a larger organization that buys software, hardware, or services from the smaller one. These smaller suppliers and partners offer an indirect path into a major corporation's network.
Another tactic favored by more patient cyber criminals is targeting small companies in growth industries, such as health care or manufacturing. The attackers plant a backdoor in the smaller organization's software in the hope that the target will be acquired by a larger corporation somewhere down the road. Meanwhile, they lie in wait; if and when the company merges or is acquired, they use that access to breach the systems of the larger parent company.
Despite the statistics, too many small businesses think they're invulnerable. Some believe their small business would be a boring target for hackers.
Small businesses can't afford to remain complacent or ignorant about the risk of a cyber attack, because they store information valuable to hackers, such as customers' credit card numbers, code repositories, and intellectual property.
The most common tactics cyber attackers use against small businesses include "ransomware" scams that lock computers and demand a ransom fee, malicious software designed to steal information from employees' mobile devices, and malware that uses a small business's website as bait to gain access to a larger company's database.
To combat the siege, small businesses should deploy basic tactics such as using strong passwords, maintaining up-to-date antivirus software, and keeping essential business services off direct Internet access.
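As a small illustration of the first tactic, here is a minimal Python sketch of the kind of password-strength check a business might wire into an internal account-creation form. The specific rules (length, character variety, a tiny blocklist) are illustrative assumptions, not a standard.

```python
import re

# Illustrative blocklist; a real deployment would check against a much
# larger list of known-breached passwords.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def is_acceptable_password(password: str) -> bool:
    """Apply a few basic strength rules (assumed policy, not a standard)."""
    if len(password) < 12:
        return False                       # too short
    if password.lower() in COMMON_PASSWORDS:
        return False                       # trivially guessable
    # Require at least three character classes: lower, upper, digit, symbol.
    classes = [
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^a-zA-Z0-9]", password),
    ]
    return sum(1 for c in classes if c) >= 3

print(is_acceptable_password("correct-Horse-42"))  # True
print(is_acceptable_password("password"))          # False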
There are many challenges facing developers and testers in today's mobile world. Their products must work smoothly across as many as 1,800 different device and platform combinations, and they must anticipate what device manufacturers will come up with next – quite a challenge for any organization.
As mobile demand continues to accelerate, developers and testers struggle to keep pace on mobile quality, performance, and development. Last year, 708 million people used smartphones, and the number continues to grow. Maybe you're even reading this article on one of the 118.9 million consumer tablets sold last year. There will soon be more mobile connected devices on Earth than there are people to use them, and Bring Your Own Device (BYOD) continues to raise the stakes higher still.
Smartphones don't simply carry the news – they make it. So interchangeable are these devices – at least on the surface – that technology experts now focus on what each new device doesn't have and cannot do, rather than on what it can.
The competition between device vendors means more people than ever have inexpensive access
to powerful mobile computing platforms. For consumers, this means choice. For organizations building mobile applications, this means added complexity. In a series of three posts, I will offer advice to ensure that you come out on top in the evolving mobile landscape.
Get With The Program
First up: you have to understand why the demand for application testing exists. Companies must increasingly engage with their customers through mobile apps. When executed properly, a mobile app will work on many levels. In fact, worldwide app revenue hit more than $15bn at the end of last year. With nearly half of smartphone users downloading at least one new app every week, that’s a lot to add to the bottom line.
The key phrase here is 'executed properly.' Currently, developers must write, test, and release apps that work on 130 different Android devices running seven different platform versions and two firmware sets, and every combination needs thorough testing to ensure business functionality. Some device functions vary across carrier networks – and even the movement of the device itself can affect the application's behavior. Get it wrong and you'll fall into the 'worst business apps' category. When end users have the power to determine the fate of an application with ratings and reviews, there aren't any second chances to make a good impression.
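To make the scale of that matrix concrete, here is a minimal Python sketch of how a test team might enumerate device/platform/firmware combinations before choosing a subset to automate. The device and platform names are illustrative placeholders, not market data.

```python
from itertools import product

# Illustrative fragments of a device matrix; real matrices run to
# hundreds of devices and many more platform versions.
devices = ["Galaxy S III", "Nexus 4", "Xperia Z", "HTC One"]
platforms = ["Android 2.3", "Android 4.0", "Android 4.1", "Android 4.2"]
firmware = ["stock", "carrier-modified"]

combinations = list(product(devices, platforms, firmware))
print(len(combinations), "configurations to cover")  # 32 here; far more in practice

# Few teams can test everything, so a priority subset is chosen, e.g.
# dropping legacy platforms (string comparison happens to work for
# these particular labels).
priority = [c for c in combinations if c[1] >= "Android 4.0"]
print(len(priority), "configurations after dropping legacy platforms")
```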
Mobile vs Traditional Automation
Now that you understand the "why," it's time to move on to the "how." Because of the maturity of the platforms and the historical build-up of quality teams and infrastructures, traditional test automation typically leverages 'black box' testing. Mobile test automation, while conceptually the same, requires different methods to accommodate near-constant change to platforms and architectures. Mobile testers need more access to, and insight into, the application's architecture, structure, and inner workings – 'white box' testing. Truly flexible white box testing requires a strong object recognition process. In other words, mobile devices need to provide more information about the platform than traditional desktop applications ever had to, and it all starts with object recognition.
Object recognition provides insight into items like buttons, controls, images, containers, and menu items. Because of the differences in platforms and devices, not all objects are created equal. It’s imperative that mobile testing technologies have a variety of ways to recognize objects under test. If a tester doesn’t know or doesn’t have access to objects within the app, it makes testing that app a long and drawn out process, which would defeat the purpose of test automation.
Native object recognition is the most robust method for understanding the parameters of objects within an app, but it isn't available on every platform. Testers need to make sure that mobile test automation tools use a combination of object recognition technologies: native, image-based, and text-based recognition. That combination ensures tests can run on the widest variety of platforms with the least maintenance across testing scripts.
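Conceptually, the recognition layer works like a chain of strategies tried in order of robustness. The following Python sketch illustrates only that fallback idea; it is not any vendor's actual API, and the screen structure and strategy functions are invented for the example.

```python
class ObjectNotFound(Exception):
    pass

# Hypothetical strategies: a real tool would query the device's native
# UI tree, match visible text, or fall back to image template matching.
def by_native_id(screen, descriptor):
    return screen.get("ui_tree", {}).get(descriptor.get("id"))

def by_text_label(screen, descriptor):
    for obj in screen.get("objects", []):
        if obj.get("text") == descriptor.get("text"):
            return obj
    return None

def by_image_template(screen, descriptor):
    # Most fragile: breaks when resolution, theme, or locale changes.
    return screen.get("templates", {}).get(descriptor.get("template"))

def find_object(screen, descriptor):
    """Try recognition strategies from most to least robust."""
    for strategy in (by_native_id, by_text_label, by_image_template):
        result = strategy(screen, descriptor)
        if result is not None:
            return result
    raise ObjectNotFound(descriptor)

# Example: the native lookup fails (no id exposed), text match succeeds.
screen = {"ui_tree": {}, "objects": [{"text": "Log in", "x": 10, "y": 200}]}
print(find_object(screen, {"id": "btn_login", "text": "Log in"}))
```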
On top of multimodal recognition technologies, an instrumentation strategy – a way to enhance native object recognition – is beneficial. While frowned upon in traditional testing, instrumentation is the norm in mobile testing and is driving future thinking. So the future of mobile testing is to embrace white box testing, and to be less prescriptive about object recognition in order to maximize the benefits.
So there you have it: the why and how of application testing. Watch out for my next post in which I’ll explain how to maintain your testing script without breaking the bank.
- Archie Roboostoff, Director, Borland Portfolio at Micro Focus
Posted by Bill Gerneglia in Untagged
Web-based attacks are on the rise as cyber criminals and others who harm computer systems for profit or malice prey on the Web's areas of vulnerability, and businesses are feeling the effects of the attacks on their resources. Currently the weakest link is the Web browser: vulnerabilities in browser add-ons such as Java, Flash, and Adobe Reader are a common source of network incursions and endpoint infections. To mitigate these significant business risks, many IT security experts suggest putting in place a properly layered defense with effective endpoint and Web security and monitoring.
A recent survey was conducted by Webroot to assess the state of the Web security layer in organizations throughout the United States and the United Kingdom. The study focused on companies that currently have a Web security solution or plan to deploy one in 2013. The survey confirmed that Web-borne attacks are impacting businesses, with the majority of them reporting significant impacts in the form of increased help desk time, reduced employee productivity and disruption of business activities.
The key findings from the survey are listed below:
• 8 in 10 companies experienced one or more kinds of Web-borne attacks in 2012.
• 88% of Web security administrators say Web browsing is a serious malware risk to their firm.
• Phishing is the most prevalent Web-borne attack, affecting 55% of companies.
• Web security administrators report that Web-borne attacks have a significant negative impact on
help desk time, IT resources, employee productivity and the security of customer data.
• Companies that deploy a Web security solution are far less likely to be victims of password hacking,
SQL injection attacks, social engineering attacks and Web site compromises.
The complete survey results may be downloaded here.
I’ve always said that discussions between business and IT people at the planning stage of IT projects are critical to success. However, the fundamental difficulty is that IT and business people do not understand each other well enough – they do not speak the same language. Let’s consider the business implications and suggest how the two parties can bridge the gap.
‘Discussion’ is a conversation: two or more parties exchange views and ideas, arguing about them, trying to reach common ground and a conclusion. The absence of a common language means that there are parallel monologues rather than an exchange. There is no communication and hence no common ground or conclusion. This is obvious in the case of natural languages, but equally a problem with specialist vocabularies, such as IT’s.
If the business and IT people do not communicate, the business may view the IT organization as unresponsive, slow and expensive, holding the business back while being a drain on resources. On the other hand, IT comes to see the business as overly demanding, with unrealistic expectations – especially because, in IT’s opinion, it is too tight with money.
These problems are not universal but occur frequently enough to be a source of concern.
And the gap in understanding can and does have serious consequences. Projects significantly underachieve and ill-considered, large-scale projects start for no good reason.
The following case from the public sector illustrates my point.
The British politician Jack Straw was Home Secretary for a time during the UK’s 1997-2010 Labour governments. When he took over the Home Office, he encountered an IT project experiencing major problems with highly visible external consequences.
In his memoirs (Last Man Standing, Macmillan, 2012, p. 295), Mr. Straw quotes the chairman of the Public Accounts Committee (a parliamentary watchdog) as saying that in this case and in many others there had been a "horrible interface" between civil servants who "understand all there is to know about, for example, the National Insurance system but know little of how a computer works, and the technicians who know just the reverse. They don't spend enough time at the start of a project explaining where they are both coming from."
The private sector is equally guilty, but much better at hiding the consequences.
So what can be done?
Like many real problems, there is no quick fix available, but that does not mean that we can’t do anything. It is essential to recognize that the problem of mutual miscommunication based on misunderstanding is a real one. Each side can then make determined efforts to find out more about the other. For example, the business should come to view IT as a significant player in the enterprise, ideally with board-level representation.
One clear starting point for resolving the issue is engaging an advisory service, which brings together representatives from the business and IT to review the current state of the IT environment and the desired future state to meet evolving business needs.
In these mediated discussions, both business and IT groups often realize where and why each views things differently, reach a common understanding based on reasonable accommodations, and improve future communication as a result.
So while IT and business people may never speak exactly the same language, they can definitely work to create a common set of assumptions. That common vocabulary can enable each side to consistently understand the other’s position and achieve mutually satisfactory and beneficial business outcomes from strategically important IT projects.
-Peter Bye, IT Consultant with Unisys
Multiple choice: Exfiltrate is . . .
1. What’s left over after your filter removes harmful substances. “Harry, I think you better change the water filter. The exfiltrate looks cloudy.”
2. Removal of a layer of carbon atoms. “Professor, was it hard for those Nobel Prize winners to exfiltrate the graphite to get graphene?”
3. To leave a locale to escape prosecution. “Rocky, me and the boys gotta exfiltrate ‘cause things are heating up here.”
4. Someone wanted for crimes in another country. “Canadian officials arrested three exfiltrates and returned them to the United States.”
5. To observe from the outside. “We couldn’t infiltrate the organization, but with modern surveillance equipment we can exfiltrate them.”
6. To obscure classified information. “We finally got a letter from Johnny, but it was so heavily exfiltrated that there weren’t any complete sentences.”
If you selected (1) through (6), you’re just guessing. Although there are other definitions of exfiltrate—some similar to the foils above—the one we’re interested in today is “(7) to take sensitive data out of a victim’s environment.”
“Exfiltration. Confidential data is sent back to the hacker team either in the clear (by Web mail, for example), wrapped in encrypted packets or zipped files with password protection.”
Exfiltration can be done at the hands of an insider—perhaps a dishonest employee or a well-meaning but poorly trained worker tricked into sending data that should remain confidential—but for this article I’d like to concentrate on exfiltration resulting from outside attacks.
Well-organized criminal enterprises store large amounts of captured data on exfiltration servers, where it is picked up later by retrieval agents. These servers could be systems outside the targeted enterprise, accumulating the smaller packets, or they could be compromised servers within the victim’s enterprise. The data could be escaping disguised as email, PDF files, .doc, .xls, CAD, graphics files or other common types with hidden payloads.
How does this happen?
It starts with attackers getting into a company’s systems. Stolen login credentials and spyware such as keystroke loggers are two of the most common incursion methods.
Next, the attacker or software operating on his behalf sniffs around electronically to locate valuable confidential data. Then, the information is typically copied to exfiltration servers. The final step is retrieval of the information by the attackers, who are now poised to sell it to the highest bidder or to use it themselves.
What can you do about it?
Start by taking steps to prevent the initial incursion. For example,
• Establish and enforce policies for secure storage and handling of confidential data.
• Train your staff on basic security procedures.
• Use hacker frustration features and other techniques offered by many software providers to make it harder for an attacker to gain system access.
• Deploy malware prevention software on your Internet-facing systems.
Also make sure your important data is protected. For example,
• Use Guard Files and Automatic Content Recognition tools to restrict access to the data.
• Encrypt the data so that if it is exfiltrated without the encryption keys, the attackers will not be able to read it (see the sketch after this list).
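As one illustration of the encryption point, here is a minimal Python sketch using the third-party cryptography package (one possible tool; the article doesn't prescribe one). The file names are placeholders, and a real deployment would keep the key in a vault or HSM rather than alongside the data.

```python
# Minimal sketch of encrypting a sensitive file at rest, using the
# third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store in a key vault, NOT beside the data
fernet = Fernet(key)

# Hypothetical file name for illustration.
with open("customer_records.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# An attacker who exfiltrates customer_records.csv.enc without the key
# gets only opaque ciphertext; decryption requires the key:
plaintext = fernet.decrypt(ciphertext)
```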
Then, monitor for suspicious activity and be prepared to take quick action. For example,
• Monitor outbound traffic. You don’t have any clients in the Far East, but you see data periodically going there from your networks? Hmm.
• Identify suspicious data on your servers. What about those RAR files (an archive format developed in Russia and popular there) that appeared a couple of weeks ago and seem to be growing in size? Hmm, again (see the sketch after this list).
• Identify suspicious system behavior. Network traffic peaked at 3:00 a.m., when nothing significant is running on the system? Hmm, once again.
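The article doesn't prescribe tooling for these checks, but as a hedged illustration of the second one, here is a minimal Python sketch that scans file servers for archive formats such as RAR appearing where they shouldn't. The watched directories and extensions are assumptions.

```python
import os
import time

# Assumed watch list: directories where archive files should not appear.
WATCH_DIRS = ["/srv/fileshare"]
SUSPICIOUS_EXTENSIONS = (".rar", ".7z")

def find_suspicious_archives(dirs):
    """Return (path, size) for archive files found under the watched dirs."""
    hits = []
    for top in dirs:
        for root, _subdirs, files in os.walk(top):
            for name in files:
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS):
                    path = os.path.join(root, name)
                    hits.append((path, os.path.getsize(path)))
    return hits

# Run periodically (e.g., from cron) and compare sizes between runs;
# an archive that keeps growing at 3:00 a.m. deserves a closer look.
for path, size in find_suspicious_archives(WATCH_DIRS):
    print(time.ctime(), path, size, "bytes")
```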
Of course, if the attackers have gotten into your servers, you might figure that the game is up and you’ve lost all your proprietary information. But that’s not always the case. Sometimes attackers don’t immediately find and exfiltrate the data they want. Verizon’s 2012 Data Breach Investigations Report
includes this somewhat encouraging statistic:
“In over 40% of incidents we investigated, it took attackers a day or more to locate and exfiltrate data. This gives some hope that reasonable time exists for more than one shot at detecting/stopping the incident before data is completely removed from a victim’s control.”
Furthermore, the types of attacks known as Advanced Persistent Threats involve malware agents that remain on your servers and continue to steal your data over weeks, months, or longer.
So even though competitors halfway around the world might be looking over your plans for the Model One Outboard Turfenfoil today, you still can thwart their attempts to learn about the Model Two if you apply diligent security measures that prevent further exfiltration.
-Glen Newton, Security Architect, Unisys
Posted by PeterBye in Unisys, IT
The success of IT projects is commonly measured by three metrics: did the project deliver on time, within budget and do what it was asked to do? Although there is some controversy over the scale of the problem, there is little doubt that significant underachievement in one or more of the three metrics wastes vast amounts of money. A glance at some of the high profile examples reported in the press shows just how much.
As might be expected, much effort is spent in trying to identify the root causes of partial or complete failure. It appears that there is no single dominant cause. Factors such as unclear and changing requirements, lack of management commitment, selection of the wrong technology and plain incompetence are variously thought to be the culprits.
However, there is one factor, IT procurement, which may lie behind a number of apparently different causes of problems. Procurement processes include formal procedures to be followed in procuring products and services. Other factors, to do with attitudes and ways of doing things, also influence procurement decisions.
I suggest that although procurement processes are almost always established with the best of intentions – primarily getting value for money – they may perversely achieve the opposite. In particular, I believe they often have negative results because they obstruct essential discussion at project inception, and so compromise the entire project. I’ll try to explain why.
IT projects today can be quite complex, involving new developments and integration with existing systems, both within an organisation and externally. The requirements may not be clear. This is not necessarily anyone’s fault; there may just be uncertainty about what can be done with the technology and budget available.
All this points to a need for extensive discussion early on, before procurement decisions are finally made. People who understand the business requirements and the technology available have to get together to decide the best possible approach. Small proof-of-concept projects may be required to test ideas and gather information, for example to explore what can be done with new technologies.
Procurement processes can make these discussions next to impossible. Invitations to tender may require sealed bids. Questions may have to be posed in writing or in vendor conferences, with the answers made available to all bidders. All this is done in the interest of fairness, to avoid bias.
The result is that procurement decisions may be made without a basis of adequate understanding. In an effort to win business in difficult times, vendors keep prices low in spite of the uncertainty, with a hope of renegotiation later. The result is all too likely to be significant underachievement in one or more of our metrics.
It need not be like this. Here are some suggestions to improve the situation.
First, the need for upfront discussion and experiment is critical. It could at least in part be built into the procurement process, before a contract is awarded, perhaps by paying the costs of losing bidders. This is expensive but could ultimately save money by increasing the success rates of projects. The approach is in fact adopted in some cases, for example for large defence projects.
A second option is to allow greater flexibility at the start of a project, so that the necessary discussion and proof-of-concept projects can take place. An initial fixed price may be impossible, but subsequent implementation projects could be done at a fixed price. Again, the likelihood of greater success rates, with fewer overruns of cost and time, would make it worthwhile.
But perhaps the fundamental difficulty is that IT and business people do not understand each other well enough – they do not speak the same language. That’s a subject for another time!
-Peter Bye, IT Consultant with Unisys