Thursday, April 26, 2012

VMWare gets jacked...


Today, virtualization and cloud computing are becoming extremely popular. This article on darkreading.com quotes Eric Chiu, founder and president of HyTrust, a virtualization security firm, as stating that “Virtualization is mainstream and over 50 percent of enterprise datacenters are now virtualized.” Because of this growing usage of virtual machines, virtualization is an increasingly attractive target for malicious users.

Theft of a portion of VMWare's ESX hypervisor product is a big deal. VMWare apparently did not offer any clues as to how or when the breach occurred, but a hacker has taken credit for the theft and posted a single file's worth of source code for public viewing. VMWare officials, the article explains, say that the code is legitimate, but based on inspection of the code and the developer comments, it appears to date back to 2003-2004.

VMWare claims that customers should not be concerned about any risks brought about by this theft and broadcast of the code. They stand firm in their philosophy of not relying on security through obscurity; instead, they keep source code shared among certain industry partners in order to increase the number of eyes and brains working on making the code as secure as possible.

"VMware proactively shares its source code and interfaces with other industry participants to enable the broad virtualization ecosystem today. We take customer security seriously and have engaged internal and external resources, including our VMware Security Response Center, to thoroughly investigate. We will continue to provide updates to the VMware community if and when additional information is available."

I believe, and have said many times before, that this philosophy is crucial to circumventing the problems that present themselves when breaches do happen. A similar situation happened with Symantec's pcAnywhere suite: Symantec suffered severe public image damage, customer loss, market share damage, and lost brand loyalty due to the vulnerabilities that ensued after a comparable breach and broadcast of some of their source code. Symantec operated on a security-through-obscurity model and treated source code as top secret, assuming that if you can't see the code, you can't take advantage of the not-so-ideal coding practices that cause vulnerabilities.

The article explains, and I agree, that it just goes to show you that even the most prepared companies, with balls-to-the-wall security and non-disclosure implementations, can still fall victim to this type of breach. Whether it was due to a 'great wall' attack (an insider walking the data out), a man-in-the-middle, or simply a social engineering hijack, we do not know, but does it really matter?

Furthermore, this announcement by VMWare bolsters the argument for open source products. A good idea in this day and age of software, in my opinion, is to get as many brains as possible looking at your product's code, to broaden the range of perspectives and increase the chances of finding flaws and vulnerabilities before they become a real threat to customers.

Facebook and Possible Solutions...


An article from CNN.com explains the changes Facebook is making to its Statement of Rights and Responsibilities. It was originally assumed that the changes were being made with the intention of decreasing users' paranoia about how the information and data on their pages, as well as their social connections, are being used.

Although these changes and their intentions were explicitly said to benefit the end user, it just so happens that essentially zero changes are taking place in how Facebook actually collects and uses data. It is now clear that Facebook only announced a change to the “Statement of Rights and Responsibilities,” as opposed to a change to the privacy policy, which is where the terms of data collection and usage are actually spelled out.

Many people are concerned with Facebook and how their data could be used, but you must remember that Facebook is a free service that makes its money through advertising. I'm not suggesting that there doesn't need to be any clear terms or opt-out as far as data mining is concerned. I do believe that internet privacy is a big issue today, and that there needs to be a consensus among the big corporations AND users about how to deal with it. However, I do understand that Facebook needs to use your data for advertising, as that is its main revenue source.

Despite Facebook's need to use your data in order to increase its bottom line, I believe that fundamental changes can be made to ensure a more solid policy of data usage. For example, as opposed to sending data and trends out to advertising companies, delegate the task of 'figuring out what ad you are more likely to click on' to the entity the ad will appear on.

For example, perhaps Facebook could simply maintain a list of “major categories” that percentages of its users belong to. Instead of broadcasting specifics about each and every user, it could give advertising companies this category list, so that they can prepare several different ads targeting certain categories. Then, on the Facebook server, an algorithm (with some randomization) determines which advertisement is shown, based on which categories you belong to and which categories of advertisements are in this week's line-up.

I think a solution much like this could be extremely successful and might also instill more competition among advertisers. To me it's like a pot-luck dinner. If I am, for example, a 'sports enthusiast,' 'computer programmer,' 'college student,' and 'socially inclined,' there will be a random chance of advertisements that fit these categories being shown on pages I visit. The advertisers would only need to know the statistics of how accurate the system is, which in turn would work much like 'prime-time' advertising. Say, for example, 70% of users who are sports enthusiasts click on sports memorabilia and ticket advertisements. That would drive competition in the 'sports enthusiast' category, increasing the price of getting your advertisement into that week's or month's line-up. Facebook would then have no reason to broadcast your personal data; it would only analyze it statistically, PRIVATELY, and make the aggregate findings available to advertisers, rather than, for example, using tracking cookies.
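The category-and-line-up scheme described above can be sketched in a few lines of Python. This is purely my own illustration of the idea, not anything Facebook actually does; the category names, ad IDs, and bid values are all made up:

```python
import random

# Hypothetical sketch of server-side ad selection by interest category.
# Advertisers only ever see category names and aggregate stats; the
# individual user's data never leaves the server.

# This week's ad line-up: category -> list of (ad_id, bid) pairs.
AD_LINEUP = {
    "sports_enthusiast": [("sports_tickets_ad", 3.50), ("memorabilia_ad", 2.75)],
    "computer_programmer": [("ide_ad", 1.20)],
    "college_student": [("textbook_ad", 0.80), ("laptop_ad", 1.10)],
}

def pick_ad(user_categories):
    """Pick one ad at random, weighting candidates by advertiser bids."""
    candidates = []
    for category in user_categories:
        candidates.extend(AD_LINEUP.get(category, []))
    if not candidates:
        return None
    ads, bids = zip(*candidates)
    # Higher-bidding ads are shown proportionally more often.
    return random.choices(ads, weights=bids, k=1)[0]

print(pick_ad(["sports_enthusiast", "college_student"]))
```

The key point of the design is that `pick_ad` runs on the server, so the advertiser only learns which ad was served, never which user profile triggered it.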

Google Drive Incites Paranoia...


Google Drive is getting a lot of publicity lately, and it is not due to the amazing innovation in cloud storage, or even because you can get 5 GB for free that syncs across all Google-enabled devices, not in the least. It's because the privacy policy is worded in such a way that the documents, content, files, pictures, and any media that you store in the cloud are now 'owned' by Google.

Though the policy doesn't explicitly say this, it can be interpreted that way. This article from CNN.com explains that Google Drive's terms of service state that Google reserves the right to “use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute” anything that you upload to Google Drive. Furthermore, Google's 'unified privacy policy' explicitly states that it won't use your content for anything other than to “provide, maintain, protect and improve [its services], [and] to develop new ones.”

This implies that anything you post, as the article explains, such as a picture, could end up in a promotional advertisement. Now your face is blasted across the web, generating revenue for Google, with no royalties owed, because the legal terms explicitly state that anything you upload to Google Drive is subject to this type of use, without additional consent from the user.

In addition to this bypass of consent-based usage of your files to 'promote' a Google service, people are worried about just how public this material is. Consider the Megaupload scandal: the article explains that it's possible for your content on the cloud to be subject to a subpoena and taken off the cloud forever.

Then there's the new unified privacy policy that Google introduced months ago, which explicitly said that data gathered from your activity cannot be used cross-service. This raises the question: where is the line drawn for 'cross-service'? Google stated that data gathered from e-mail content within Gmail wouldn't be used for other services, for example advertising on the search engine, but it can be used for advertisements in the actual e-mail interface?

At first I thought the new privacy policy implemented by Google was fairly transparent, but in hindsight the transparency was a farce. Vague interpretations of the words Service, Product, and Promotion suggest the need for more intense research and awareness among users, rather than taking the policy at face value.


Wednesday, April 25, 2012

CISPA...


The Stop Online Piracy Act and the Protect Intellectual Property Act, SOPA and PIPA, were all the rage over the past few months as lawmakers tried to push them through Congress. These two bills and the legislation behind them caused a pandemic of publicity and outrage. Huge companies lined up in opposition, and Google circulated petitions that even I took part in.

Although the bills were unsuccessful in Congress, there is another bill in the works that a lot of people don't realize is even worse. It's called the Cyber Intelligence Sharing and Protection Act, CISPA for short.

The reason this bill is even worse than SOPA or PIPA is the vague wording in the bill:

Theft or misappropriation of private or government information, intellectual property or personally identifiable material.

This is far too broad a statement! Protecting from the 'misappropriation' of intellectual property? So basically, if your website challenges an idea, say Facebook's or the government's, and all of a sudden you are their competition, they can take legal action against you and your website. The wording of the bill is vague enough that if you are a large corporation, you're on the government's side and are 'protected' under the bill. It is a violation of First Amendment rights to freedom of speech, press, and religion.

The difference between CISPA and the other bills is that the same big corporations and companies that were against SOPA and PIPA are now in favor of CISPA. The reason for this is explained in this article on PCWorld.com; of course, several other sources have been trying to raise awareness of CISPA as well.

The “broader information sharing between business and government” is basically a new revenue stream for big information-gathering companies like Google and Facebook. This unhindered, government-supported invasion of privacy is scary to me. Where do we draw the line? It's all a big rat race for these companies, and money is the driving force behind the violation of our civil rights, even if it does happen to be on the internet.

Stingrays and feds...


Where is the line drawn for your Fourth Amendment right? You know the one; it states:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Almost everyone has a cell phone these days. A random study I stumbled upon states that even many US third graders (8 years old) have cell phones.

With the massive acceptance and usage of cell phones, would you expect some privacy on those lines? Wouldn't you suppose that a cell phone and a conversation had on one, would be private, aside from maybe a warrant based on probable cause being involved?

Apparently the federal court does not agree. Within the last year, a device known as a “stingray” has been getting a lot of publicity. The device works by pretending to be a cell phone tower, intercepting signals and initiating handshakes with mobile devices of its own accord. It does not actually disrupt traffic; it acts as a man in the middle for cell phone traffic. The government can use these devices to monitor any and all cell phone data and conversations.

This article states that the government insists that stingrays do not infringe on the Fourth Amendment. Their claim is that most people do not have an expectation of privacy in the data their phones transmit, so the government does not need to treat it as private or protected.

In this day and age of wireless everything, how can you not expect some level of privacy? Practically everyone uses a cell phone, and for government legislators to entirely disregard the Fourth Amendment is insane to me.

Saturday, April 14, 2012

Worms in Apples...


Apple's operating systems used to be renowned by loyal users for their lack of viruses, a major bullet point in many heated Mac vs. Linux vs. Windows debates. Even many Linux operating systems are (mistakenly) said to have very few viruses compared to Windows. The fact of the matter is, the only reason there seems to be less malware for these operating systems is that they used to not be as widely used as they are today. Why would a hacker spend his time and skill targeting a community that is but a fraction of another? This does not mean the viruses do not exist. Over the past few years Macs have become more popular with average computer users and media/software developers alike, which in turn means more attacks directed at the Mac are only natural.

Recently in the news, as you may have heard, a trojan by the name of Flashback has infected an estimated 600,000 machines or more. The program works in a way that seems unheard of in this day and age of user access controls and authentication-based security. Normally one would need to click on a bad link, download and run an infected program, or hit “allow” on something you have no clue about. Not Flashback. It exploits a loophole in Java's automatic updates to download the malware automatically. Another method of infection is a spoofed Adobe update popup.

A quote from a NY Times online article says “Several security experts have criticized Apple as slow to react, considering that Oracle issued a fix to the Java security hole in February. Apple did not issue a fix until more than a month later.”

Another quote from the article agrees with my thoughts on this: Apple and its users were so confident that their system was tightly secured that there was a prominent lack of anti-virus, anti-malware, and other protection. This fact alone makes Mac users an easy target for hackers, and also gives malicious hackers several zero-day exploits to use. Since there were no security “risks” to patch before, I can't imagine how many vulnerabilities are available to exploit. In Windows' defense, as I am a Microsoft lackey, we have faced a never-ending bombardment of malicious software, which has effectively improved the response time and the overall security and solidity of the operating system. I won't write Apple off, but they have a large curve to overcome in getting with the times in the never-ending battle against maliciousness.

In this day and age, a software company being lulled into a false sense of security merely because it has never been targeted is a big mistake. It is akin to always leaving the front door of your house wide open because you simply have never been robbed. In a way, though, I feel Apple users being attacked is a good thing. Fool me once, shame on you; fool me twice, shame on me. If anything, it will get Apple developers to put up their guard, and give anti-virus and anti-malware software designers incentive to develop more hearty, paid services, since there will be a growing market for them. The security blanket is gone, and people will realize that they do in fact need to practice safe computing and proper protection of their Mac computers.

Philosophy of Security...


When talking about security on the internet and within computer systems and networks, people always say to assume the worst. There is always that remote possibility, so as a security engineer, you cannot just ignore certain types of threats simply because their likelihood is very low.

This article by Kelly Jackson Higgins on DarkReading.com explains that the likelihood of a malicious intruder is actually a lot higher than you or I would assume.

The RSA Conference is a seminar held in San Francisco that focuses on cryptography and progress in the field of internet security. Higgins reports that one of the most interesting new tools in system security is a device that monitors for behind-the-lines intrusions. A tool like this is similar to what Cliff Stoll used in the tracking and apprehension of hacker Markus Hess on the Berkeley networks in 1986.

When talking about security in a commercial setting, most talking points are about first-line defenses: firewalls, traffic limiting and monitoring, strong passwords, trustworthy and reliable users, and things of that nature. This tool, however, assumes the worst. As Darin Anderson, a U.S. country manager for Norman Data Defense Systems, is quoted saying, “The dirty little secret in our industry is that everyone has been compromised,” and other prominent folks in the security industry agree. Security breaches are not a matter of if but when. This is a massive shift in security philosophy, in my opinion, and a welcome one. It has long been a priority to keep a system secure from external intrusion by unauthorized users, but I think it is just as important to have proper counter-measures in place for when your system does finally become compromised. No system is perfect. If there were a perfect security system there would be no need for any progress in the industry; with the quickly evolving technology market, there will always be bugs and holes, in software and in thinking, that need to be repaired.

The tool sits inside a network and is used to track the suspicious activities of intruders. The article explains that this philosophical shift is attributed to the fact that most attacks have become highly sophisticated, driven by hackers' desire for financial gain, so profit and attack success have become tightly related.
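To make the "assume they're already inside" idea concrete, here is a toy sketch of the kind of internal monitoring such a tool might do. This is my own illustration, not the commercial product from the article; the host names, traffic numbers, and threshold are all invented:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Toy behind-the-lines monitor: flag hosts whose outbound traffic
# suddenly jumps far above their own historical baseline -- the kind
# of anomaly an intruder exfiltrating data would produce.

history = defaultdict(list)  # host -> past daily outbound byte counts

def record(host, bytes_out):
    history[host].append(bytes_out)

def is_suspicious(host, bytes_out, threshold=3.0):
    """True if today's traffic is more than `threshold` standard
    deviations above this host's historical mean."""
    past = history[host]
    if len(past) < 5:          # not enough baseline data yet
        return False
    mu, sigma = mean(past), pstdev(past)
    if sigma == 0:
        return bytes_out > mu
    return (bytes_out - mu) / sigma > threshold

# Build a baseline of ordinary days, then check two new readings.
for day in [100, 110, 95, 105, 98, 102]:
    record("db-server", day)

print(is_suspicious("db-server", 104))   # an ordinary day
print(is_suspicious("db-server", 900))   # a suspicious spike
```

Real products are far more sophisticated, but the philosophy is the same: rather than trying to keep intruders out, watch your own machines for behavior that doesn't fit their history.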

The saying goes, “Keep your friends close but keep your enemies closer.” I feel this is a shift in security attitude that needed to happen. You can't always rely on your system of intricate firewalls and protocols to keep you safe, as we all know that human error comes into play with any sort of procedural protection. You cannot prevent someone from making a mistake, so having the proper counter-measures in place along with proper defenses may be just what this industry needs, even if it is simply a matter of deterrence, countering hacker incentive with a greater risk of detection.


Microsoft tanks botnet progress...


A botnet, at its most basic, is comprised of computers infected by malware that issue status updates to, and await commands from, a command and control server somewhere in cyberspace. These commands can range from forwarding traffic for a hacker's anonymous browsing needs, to downloading more malware, to executing code that initiates denial of service attacks.

Microsoft took down two of the command and control machines in the Zeus botnet of its own accord, through its own federal court filings and actions.

This article by Kelly Jackson Higgins on DarkReading explains that law enforcement agencies, tech firms, and other non-governmental organizations around the world work together to track and disable botnets.

Law enforcement across the globe is outraged by the lack of cooperation. Apparently Microsoft took US federal court orders and moved against the botnet control computers, effectively killing off two IP addresses. The concern is that Microsoft's actions have both harmed ongoing investigations into locating the botnet masters and damaged valuable trust among the various entities involved in tracking and disabling botnets around the world.

After the debacle, Microsoft was conspicuously absent from a recent take-down of the Kelihos (Hlux.B, Kelihos.B) botnet. The method of take-down? 'Poisoning' the P2P network with the researchers' own white-hat code, which essentially points infected machines at a dummy control center, thereby sapping much of the power of the botnet.
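The sinkholing idea can be illustrated with a toy model. This is my own grossly simplified sketch, not the researchers' actual take-down code; the host names and commands are made up:

```python
# Toy model of P2P botnet sinkholing: each bot keeps a peer list and
# obeys whichever controller it currently trusts. The defenders
# "poison" that peer list so the bot ends up talking to a harmless
# dummy server that only ever answers with a no-op command.

SINKHOLE = "sinkhole.example.net"   # hypothetical researcher-run host

class Bot:
    def __init__(self, peers):
        self.peers = list(peers)    # addresses this bot will contact

    def poll(self):
        """Ask the currently trusted peer for a command."""
        controller = self.peers[0]
        if controller == SINKHOLE:
            return "NOP"            # the sinkhole neutralizes the bot
        return "ATTACK"             # a real C&C would issue real orders

def poison(bot):
    """Push the sinkhole to the front of the bot's peer list."""
    bot.peers.insert(0, SINKHOLE)

bot = Bot(peers=["evil-cc.example.com"])
print(bot.poll())   # before poisoning
poison(bot)
print(bot.poll())   # after poisoning
```

The real operation is much harder, since the peer-list update has to propagate through the botnet's own gossip protocol faster than the botmasters can push counter-updates, which is part of why the Honeynet Project stresses careful procedure.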

“The Honeynet Project has led the industry in helping define proper botnet take-down procedures. Botnet take-downs are complicated and care must be taken not to overstep legal or other boundaries,” according to Honeynet officials.

The question remains: how should this type of act be handled legally? Microsoft obviously has a metaphorical gun pointed at its head for its flippant maneuvers, but I believe the company could have been completely justified under the right circumstances. Yes, I agree that harming years of research and investigation is a fairly large mistake. But consider the offline equivalent: if you saw a wanted criminal on the street (a rapist, murderer, or kidnapper), would it be wrong to turn them in, or to make a snap decision and attempt vigilante justice if it seemed like a once-in-a-lifetime chance to stop another crime?

Friday, March 30, 2012

Loose lips sink..companies?...


When talking about security in a company, one cannot just assume it's all hardware- and software-based protections. It's about the people too. People are actually the weakest link when it comes to a company's security. Many security analyst firms prey on these weak links, and while your network may be locked up tight, someone with loose lips can easily make all of your efforts null and void.

Corporate espionage comes to mind. While at dinner, talking shop with your coworkers, you might discuss sensitive information without ever considering that you could be overheard, or even that the information is fairly sensitive and, with the right interpretation, could cost your company big bucks. Or you may be approached by a stranger striking up a conversation in a bar; you mention your company, and before you know it you could be a target of bribery or extortion for more information.

This article from darkreading.com reports on a study done by the firm FileTrek. The study surveyed 2,625 Americans over the age of 18, and by extrapolation it suggests that over 90% of Americans suspect such actions are happening, whether intentionally malicious or not.

Being a busy regional manager of a big-city branch of a company, you might think to just take some paperwork home so you can do a little catching up after dinner with the wife and kids. Well, suppose you forget your briefcase at the dry cleaner? Or you are robbed? Suppose that information could be used for insider trading and/or as a way to take your company down. It all sounds far-fetched, but it happens more often than you would think.

The article states that there is a difference of opinion among generations about whether it is acceptable to take documents off company premises; only among people 55 and older did a majority believe it was grounds for termination. Well, the fact of the matter is, it is an offense that completely warrants termination. In fact, the article shows statistics indicating that the only two offenses in the office that rank higher as grounds for termination are sexual harassment and incompetence.

The great wall syndrome is very common in today's bustling marketplace. It can be as easy as copying sensitive company data to a USB flash drive and taking it home. Perhaps you lose the data, and now you are the catalyst for your own early termination. It's fairly straightforward to protect networks and design software correctly, but it is nearly impossible to control people and their actions. Loose lips sink ships, and as far as a company's security goes, one bad egg spoils it all, completely undermining and bypassing any safeguards currently in place. I think it's an important task for companies to start teaching their employees proper information etiquette and just how sensitive data really is. Even the most benign piece of information can be interpreted, in the right hands, in a way that allows further data compromise or corporate peril like bankruptcy, buy-out, or shutdown.

Pwn2Own Win!...


Earlier in my blog I mentioned the hacking contest named Pwn2Own. Well, this article about Pwn2Own shows you just how easily and quickly focused minds can write code that exploits a vulnerability. While the contest's main focus was on browsers, for example Internet Explorer, Google Chrome, and Mozilla Firefox, it goes to show how important security should be for any software.

I find it rather amusing that the contestants found vulnerabilities and programmed the exploits in as little as one hour. What is scary is that the targets of the exploits were web browsers, major names in the industry that almost everyone uses. It brings to light how important solid coding conventions, proper programming practices, and astute risk analysis are in all things software design. How is a product supposed to be the 'best' in the business if it has as many holes as Swiss cheese? It's also fairly interesting, as the article sort of suggests, that software designers do not have a security-first philosophy. From what I understand, they merely wait until an exploit is made public before deciding to make patching that vulnerability a priority.

What I mean to say is that software engineers need to have an intuition about their code. I feel there needs to be some kind of expertise involved, some shooting down of ideas because they pose a security threat, and some more emphasis placed on solid code to prevent cheesy hacks from being possible. Companies are, in my opinion, too focused on being better and improving on a product. When your product is currently full of holes, how is that not at the top of the queue? 'If it ain't broke, don't fix it'? If the screws are loose, it's not broken yet, but does that mean you don't need to tighten them up a bit and maybe use a little thread lock this time? I think not.

I feel like it's a perpetual cycle of crap upon crap. You can't build your house on a shoddy foundation, so stop building your software additions on top of sub-par products. Make it a priority! It's impossible to fix all the bugs, and some bugs are only noticeable once they are exploited or brought to light, but most bugs are generally fairly obvious. My cynical assumption is that some software design teams will say, “Oh, I see how that could be a problem, but nobody has done that yet, so it's not really an issue.” To me this is a huge mistake that incurs massive technical debt in a product and ultimately leads to more work in the future.

Also, as I have mentioned before, the belief that security through obscurity is acceptable is a misguided and detrimental one. I am glad that Pwn2Own offered cash prizes for finding exploits, and I am glad that it brings the issue of design priorities to light.

"We created six different exploits in less than 24 hours, which demonstrates that with enough resources and expertise, a team of motivated researchers can write reliable exploits in a very short time,"

Imagine what a team backed by the budget of a nation state, a growing world power, could accomplish. To me it's scary, and not to play Chicken Little, but we all need to start designing with security in mind, not as just an afterthought.