Friday, December 27, 2013

Top 10 Uses For A Message Queue


Ref: http://blog.iron.io/2012/12/top-10-uses-for-message-queue.html?spref=tw

We’ve been working with, building, and evangelising message queues for the last year, and it’s no secret that we think they’re awesome. We believe message queues are a vital component to any architecture or application, and here are ten reasons why:
  1. Decoupling It’s extremely difficult to predict, at the start of a project, what the future needs of the project will be. By introducing a layer in between processes, message queues create an implicit, data-based interface that both processes implement. This allows you to extend and modify these processes independently, by simply ensuring they adhere to the same interface requirements.
  2. Redundancy Sometimes processes fail when processing data. Unless that data is persisted, it’s lost forever. Queues mitigate this by persisting data until it has been fully processed. The put-get-delete paradigm, which many message queues use, requires a process to explicitly indicate that it has finished processing a message before the message is removed from the queue, ensuring your data is kept safe until you’re done with it.
  3. Scalability Because message queues decouple your processes, it’s easy to scale up the rate with which messages are added to the queue or processed; simply add another process. No code needs to be changed, no configurations need to be tweaked. Scaling is as simple as adding more power.
  4. Elasticity & Spikability When your application hits the front page of Hacker News, you’re going to see unusual levels of traffic. Your application needs to be able to keep functioning with this increased load, but the traffic is an anomaly, not the standard; it’s wasteful to have enough resources on standby to handle these spikes. Message queues will allow beleaguered components to struggle through the increased load, instead of getting overloaded with requests and failing completely. Check out our Spikability blog post for more information about this.
  5. Resiliency When part of your architecture fails, it doesn’t need to take the entire system down with it. Message queues decouple processes, so if a process that is processing messages from the queue fails, messages can still be added to the queue to be processed when the system recovers. This ability to accept requests that will be retried or processed at a later date is often the difference between an inconvenienced customer and a frustrated customer.
  6. Delivery Guarantees
    The redundancy provided by message queues guarantees that a message will be processed eventually, so long as a process is reading the queue. On top of that, IronMQ provides an only-delivered-once guarantee. No matter how many processes are pulling data from the queue, each message will only be processed a single time. This is made possible because retrieving a message "reserves" that message, temporarily removing it from the queue. Unless the client specifically states that it's finished with that message, the message will be placed back on the queue to be processed after a configurable amount of time.
  7. Ordering Guarantees
    In a lot of situations, the order with which data is processed is important. Message queues are inherently ordered, and capable of providing guarantees that data will be processed in a specific order. IronMQ guarantees that messages will be processed using FIFO (first in, first out), so the order in which messages are placed on a queue is the order in which they'll be retrieved from it.
  8. Buffering In any non-trivial system, there are going to be components that require different processing times. For example, it takes less time to upload an image than it does to apply a filter to it. Message queues help these tasks operate at peak efficiency by offering a buffer layer--the process writing to the queue can write as fast as it’s able to, instead of being constrained by the readiness of the process reading from the queue. This buffer helps control and optimise the speed with which data flows through your system.
  9. Understanding Data Flow
    In a distributed system, getting an overall sense of how long user actions take to complete and why is a huge problem. Message queues, through the rate with which they are processed, help to easily identify under-performing processes or areas where the data flow is not optimal.
  10. Asynchronous Communication
    A lot of times, you don’t want to or need to process a message immediately. Message queues enable asynchronous processing, which allows you to put a message on the queue without processing it immediately. Queue up as many messages as you like, then process them at your leisure.
We believe these ten reasons make queues the best form of communication between processes or applications. We’ve spent a year building and learning from IronMQ, and our customers are doing amazing things with message queues. Queues are the key to the powerful, distributed applications that can leverage all the power that the cloud has to offer.
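To make the put-get-delete paradigm from points 2 and 6 concrete, here is a minimal sketch that fakes a queue with a plain directory. This is only an illustration of the reserve-then-delete cycle, not how IronMQ (or any real broker) is implemented; a real queue also handles reservation timeouts, concurrency and durability for you.

#!/bin/bash
# Toy queue: each message is a file; the filename (a nanosecond timestamp) gives FIFO order.
QUEUE=./queue
RESERVED=./reserved
mkdir -p "$QUEUE" "$RESERVED"

put() { echo "$1" > "$QUEUE/$(date +%s%N)"; }            # producer adds a message

get() {                                                  # consumer reserves the oldest message:
    local m                                              # it is moved out of the queue, not deleted
    m=$(ls "$QUEUE" | head -n 1)
    [ -n "$m" ] && mv "$QUEUE/$m" "$RESERVED/$m" && echo "$m"
}

delete() { rm -f "$RESERVED/$1"; }                       # explicit delete = "I'm done with it"

put "resize image 42"
msg=$(get)                         # reserved; if we crash now, the message is not lost
cat "$RESERVED/$msg"               # ...do the actual work here...
delete "$msg"                      # only now does the message disappear for good

The important part is the order of operations: the message leaves the ready queue as soon as it is read, but it is only gone for good once the consumer explicitly deletes it.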

How To Successfully Build Team Confidence


Ref: http://java.dzone.com/articles/how-successfully-build-team

In software development, building a great team is a delicate yet achievable goal. Although there isn't an exact formula, most managers would come to a common consensus on the basics. The formula might look something like this: Varying Skill Sets + Seasoned Members + Balanced Personalities + Proper Mindset = Success! Many attempt to build teams with this or similar formulas but find that some teams still struggle to shine. This is generally due to a lack of team confidence. Although each member might be an all-star on their own, when placed on a team, the team itself has its own personality. A team with confidence issues is dangerous and typically results in disorganization, frustration, mistrust, unhealthy debate, buggy releases, and missed deadlines. The good news is there's a simple way to build team confidence.
Team confidence can be achieved through delegation. This is not in reference to the programming "delegation" design pattern. This concept has been around much longer than modern technology. Good old-fashioned delegation is "the partnership of authority ... to carry out specific activities." After reading this section, there might be some skeptics out there. That's okay; skepticism is good (check out Skepticism: A Developer's Sixth Sense). Let's break down the results of this simple, yet very effective, tool:
Acquiring New Skills
Sometimes new skills will need to be built or acquired before tasks can be delegated. When this happens it's a win-win scenario. It helps team members try new things, advance within their job, and grow within their career. Adding skills can also increase the self-esteem of team members.
Expanding Team Value
As team members acquire new skills and work is delegated throughout the group, the overall team's value increases. Furthermore, this encourages the growth of tribal knowledge within a team, which helps limit the impact of losing team members who are promoted, moved to other teams, or leave the company.
Empowering Members
Delegation is an opportunity to empower others, giving team members the chance to assume more responsibility. Additionally, empowerment is a sign of trust and confidence in a member's ability to manage a task on their own.
Building Trust
Trust is always an important attribute of any successful team. Trust cannot be taken; it must be given. Delegating different types of jobs to team members will require them to work with each other. This will naturally lend itself to opportunities where trust can be built between team members.
Allowing Skills To Shine
Some developers are natural public speakers (there are a few); others might be excellent organizers. Delegation provides an opportunity to display additional skills that might not be part of daily programming life. Providing this avenue can be invaluable for retaining seasoned team members.
Increasing Efficiency
Delegating workload will increase a team's efficiency. Sometimes this impact will be immediate, while other times (in the case of learning new skills) the team will see the benefits in the long term. Additionally, as confidence grows, teams will seek out ways to become more efficient.

Although team confidence might be the goal of delegation, its results are far reaching. Delegating work is a guaranteed way to increase team satisfaction, properly motivate members, and help prevent team burnout. The fast-paced nature of technology can lead people to look only forward for new ideas and approaches, when sometimes a good idea such as delegation already exists and simply needs to be properly executed.

Friday, December 13, 2013

5 Factors that Contribute to Employee Disengagement

Ref: http://java.dzone.com/articles/5-factors-contribute-employee-0


Some employees inevitably feel less excited about, or less emotionally invested in, their job. Employee disengagement occurs for a number of reasons, and it is important for managers to be aware of these factors so they can improve their employees’ attitude and ensure productivity. Here are some of the common reasons why people experience negative feelings about their work.
1. Misled about the nature of the job
One of the causes of employee disengagement is people being misled about the kind of job and position offered to them. For instance, they are given a different idea about their workload, job responsibilities and schedule than what they were led to expect during the interview process. Transparency is therefore important, so employees do not feel forced to work at a job that gives them no satisfaction.
2. Incompetency for the job position 
There are instances where a person is hired simply because there is a pressing need to fill a specific position. As time goes by, such employees may feel misplaced and incompetent because they are in a position that is not suitable for them. When this happens, they end up disliking their job and become more of a burden than an asset to the company.
3. Poor training and harsh feedback
A company that fails to provide proper training to new employees invites disengagement. Lack of training also has a direct impact on the quality of work employees produce, particularly when it is coupled with an absence of motivation and support from managers. Eventually, their disappointment and frustration will cause them to quit.
4. No opportunities for advancement
Every employee dreams of being promoted and advancing in their chosen career. Hence, when they look for a job, they prefer a company that can offer them a promising future. If the company provides no such opportunities for advancement, employees are less likely to stay for long, and they are unlikely to be very productive or efficient while they remain.
5. Lack of recognition
Employees long for a sense of recognition or appreciation for every good thing that they do. If employees do not receive any incentives in spite of their efforts to help an organization achieve its goals, they will eventually decide to transfer to another job. With this in mind, a reward system should be set in place to encourage employees to do even better and stay longer in a company.
By being aware of these common reasons for employee disengagement, companies will be able to retain their competent employees and ensure the success of the entire organization over the long term.

Thursday, November 14, 2013

Common methods used for internet censorship

Ref: http://www.techgig.com/tech-news/editors-pick/How-to-bypass-internet-censorship-20363



IP Blocking

This is the most basic method used to filter content. It involves blocking the IP address of the target website. Unfortunately, websites sharing the same IP address, which is usually the case on a shared hosting server, are also blocked. This was the method used by ISPs in the UK to block The Pirate Bay.

Workaround: All you need is a proxy with access to the blocked site. There are numerous free proxies online. This article by Guy McDowell lists four sites that give you a free updated proxy list. The proxy server fetches the website for you and displays it on your browser. Your ISP only sees the IP address of the proxy and not the blocked website. Blocked websites can also beat this censorship method by adding a new IP address and letting users know about it. Users are then able to access the site without any problems.
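For example, once you have picked a proxy from one of those lists, most command-line tools can be pointed at it directly. The address and site below are only placeholders; substitute a real proxy from your list and the site you want to reach:

# 203.0.113.10:3128 is a placeholder proxy address
curl -x http://203.0.113.10:3128 http://blocked-site.example/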

DNS filtering and redirection
This is a much more sophisticated filtering method: the Domain Name System (DNS) fails to resolve the correct domain or returns an incorrect IP address. ISPs in many countries use this method to block illegal sites; for example, Denmark and Norway use DNS filtering to block child porn websites. China and Iran have also used this method numerous times in the past to block access to legitimate sites. Read Danny's article on how to change your DNS for more in-depth information.

Workaround: One way to circumvent this is to find a DNS server that resolves the domain name correctly, for example, OpenDNS or Google Public DNS. To change your DNS from your ISP's servers to OpenDNS or Google Public DNS, you must configure it in your operating system or device. Both have excellent tutorials for all major operating systems. You can also type the numeric IP address into your URL bar instead of the actual domain name, though this is less effective, especially when sites share IP addresses.
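For example, on a Linux machine you can point resolution at Google Public DNS (8.8.8.8 / 8.8.4.4) or OpenDNS (208.67.222.222 / 208.67.220.220), and dig lets you ask a specific resolver directly so you can compare its answer with your ISP's:

# point this machine at Google Public DNS (NetworkManager/DHCP may overwrite this file later)
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
echo "nameserver 8.8.4.4" | sudo tee -a /etc/resolv.conf

# query a specific resolver directly -- replace example.com with the blocked domain
dig @8.8.8.8 example.com +short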

URL filtering
With URL filtering, the requested URL is scanned for targeted keywords irrespective of the actual domain name typed in the URL. Many popular content control software and filters use this method. Typical users include educational institutions, private companies and government offices.

Workaround: A highly technical method to circumvent this is to use escape characters in the URL. However, it is much simpler to use an encrypted protocol such as a Virtual Private Network (VPN) service or Tor. Once the data is encrypted, the filter cannot scan the URL, and you can therefore access any website.
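As an illustration of the escape-character trick: a keyword filter that matches literal strings in the URL can sometimes be fooled by percent-encoding part of the path ("%62" is simply the hex code for the letter "b"), although most modern filters decode the URL before matching:

curl "http://example.com/%62locked-keyword"    # the server still sees /blocked-keyword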

Packet filtering
This method is also known as static packet filtering. It is a firewall technique used to control network access. Incoming and outgoing data packets are monitored and either stopped or allowed through based on pre-determined rules such as source and destination IP addresses, keywords and ports. When used in internet censorship, TCP packet transmissions are terminated by the ISP when targeted keywords are detected.

Workaround: Again, VPN services and Tor are the best ways to get around packet filtering. Packets sent over VPN and Tor contain dual IP headers. Firewalls are only able to apply the filtering rules to the outer header but not the inner header when these data packets are transmitted.

Man-in-the-middle (MITM) attack
I have only heard of this method being used by some of the regimes I mentioned earlier. It is a common hacking method, but in January 2013, Chinese authorities successfully used a MITM attack to intercept and track traffic to Github.com. As the name implies, an MITM attack is based on impersonation: the eavesdropper makes independent connections with the victims and makes them believe they are communicating with one another.

Workaround: The best defense against MITM attacks is to use encrypted network connections, such as those offered by HTTPS (what is HTTPS?) and VPN. HTTPS utilizes the SSL capabilities in your browser to conceal your network traffic from snooping eyes. There is a Chrome and Firefox extension known as HTTPS Everywhere that encrypts your communication on most major sites. When browsing over HTTPS, always take note of any browser warning to the effect that a website's certificate is not trusted; this could indicate a potential MITM attack. VPN and Tor also use SSL/TLS, which forces the attacker to obtain the key used to encrypt the traffic.
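One quick way to sanity-check a site's certificate outside the browser is openssl's s_client; if the issuer or validity dates suddenly look wrong, be suspicious. The host below is just an example:

echo | openssl s_client -connect github.com:443 -servername github.com 2>/dev/null \
    | openssl x509 -noout -issuer -subject -dates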

TCP connection resets/forged TCP resets
In this method, when a TCP connection is blocked by an existing filter, subsequent connection attempts are also blocked. It is also possible for other users or websites to be blocked, if their network traffic is routed via the location of the block. TCP connection resets were originally used by hackers to create a DOS (Denial of Service) condition, but Internet censors in many countries are increasingly finding the technique useful to prevent access to specific sites. In late 2007, it was reported that Comcast used this method to disable peer-to-peer communication. The US FCC ordered Comcast to terminate the practice in August 2008.

Workaround: The workaround for this mainly involves ignoring the reset packet transmitted by the firewall. Ignoring resets can be accomplished by applying simple firewall rules to your router, operating system or antivirus firewall. Configure your firewall to ignore the reset packet so that no further action or response is taken on that packet. You can take this a step further by examining the Time-to-live (TTL) values in the reset packets to establish whether they are coming from a censorship device. Internet users in China have successfully used this workaround to beat the Great Firewall of China.
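On a Linux box, a sketch of such a rule with iptables could look like the line below. Be aware that dropping every inbound RST also suppresses legitimate resets, so in practice you would scope it more narrowly (for example with -m ttl --ttl-eq <value> once you know the TTL the censorship device stamps on its forged packets):

# drop incoming TCP packets that carry the RST flag
sudo iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP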

Deep Packet Inspection (DPI)
Now this one is really scary. Under the wings of the PRISM project, the NSA used this method to eavesdrop on and read private email communications. China and Iran use deep packet inspection for both eavesdropping and Internet censorship. DPI technology allows prying eyes to examine the data part of a packet to search for non-compliance with pre-determined criteria. These could be keywords, a targeted email address, an IP address or, in the case of VoIP, a telephone number. While DPI was originally used to defend against spam, viruses and system intrusion, it is clear from recent developments that it is now a weapon of choice for Internet censorship.

Workaround: To beat deep packet inspection, you need to connect to a remote server using a secure VPN link. The Tor Browser bundle is also ideal for evading deep packet inspection because it conceals your location and usage from anyone carrying out network surveillance or traffic analysis.


Friday, August 9, 2013

Remove PDF File password (Linux)

To remove the password from PDF files, we can use a script like the one below.


Shell File Name : removePassWord.sh

File Location: ~/bin/removePassWord.sh

----------------------------------------------CONTENT START----------------------------------------------------------
#!/bin/bash
# Usage: removePassWord.sh <pdf-directory> <password>
# For each PDF in the directory: convert it to PostScript (supplying the user
# password), convert it back to PDF over the original, then remove the .ps file.
for f in "$1"/*.pdf
do
    echo "Removing password for pdf file - $f"
    pdftops -upw "$2" "$f" "$f.ps"
    ps2pdf "$f.ps" "$f"
    rm -f "$f.ps"
done
----------------------------------------------CONTENT ENDS ----------------------------------------------------------
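For example, to strip the password from every PDF in a folder (the directory and password below are just placeholders):

chmod +x ~/bin/removePassWord.sh
~/bin/removePassWord.sh ~/Documents/locked-pdfs 'MySecretPassword'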

Thursday, August 8, 2013

Install WAS 8 beta from the 4 zip files downloaded via http

These are the steps I had followed to install WAS 8 beta with those 4 zip files.

1. Download the 4 zip files for WAS 8 beta:

was.repo.8000.developers.ilan_part1.zip
was.repo.8000.developers.ilan_part2.zip
was.repo.8000.developers.ilan_part3.zip
was.repo.8000.developers.ilan_part4.zip


2. Extract these zips all into a directory (e.g. ~/Installers/Websphere8/ ) . The resulting directory should have directories named disk1, disk2, disk3 and disk4 (in addition to other files, but this is just to clarify the directory structure)

3. Get Installation Manager for your OS (for example, on the download page there is a file called IBM Installation Manager for Windows on Intel). Install Installation Manager onto your machine.

4. Once installed, open Installation Manager, go to File > Preferences... > select Repositories > click Add Repository... > add the directory where you unzipped your files (e.g. ~/Installers/Websphere8/). Then click OK to add the repository and OK again to quit the preferences.

5. Click on Install, and WAS 8 should be available for you to install.

Wednesday, July 31, 2013

virtualbox (CLONE) ubuntu waiting for network configuration

Ref: http://virtualboximages.com/%5BSolved%5D+Waiting+for+network+configuration.+Waiting+up+to+60+seconds+for+network+configuration

Waiting for network configuration. Waiting up to 60 seconds for network configuration
We have the solution for this error. This solution should work on any installation.
_______________________________________________________
1.) Login
2.) Enter the following command

~$ ifconfig -a

3.) Notice the first Ethernet adapter id. It should be "eth?"
4.) edit the file at /etc/network/interfaces. Change the adapter from "eth0" to the adapter id found in the previous step
5.) Save the file
6.) Reboot the virtual appliance
_______________________________________________________
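For example, if ifconfig -a showed eth1 as the new adapter (a cloned VM gets a new MAC address, so Ubuntu hands it a new interface name), the edit in step 4 boils down to:

sudo sed -i 's/eth0/eth1/g' /etc/network/interfaces
sudo reboot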

Friday, July 19, 2013

Agile Development leads to Alzheimer's

Ref:  http://java.dzone.com/articles/agile-development-leads

Iterative development and design helps you to reach your way towards understanding what the customer really needs, to try out new ideas, evaluate designs, experiment, respond to feedback and react to changing circumstances. Everything gets better as you learn more about the domain and about the customer and about the language and technologies that you are using. This is important early in development, and just as important later as the product matures and in maintenance where you are constantly tuning and fixing things and dealing with exceptions.
But there are downsides as well. Iterative development erodes code structure and quality. Michael Feathers, who has been studying different code bases over time, has found that changes made iteratively to code tend to bias more towards the existing structure of the code, that developers make more compromises working this way. Code modules that are changed often get bigger, fatter and harder to understand.
Working iteratively you will end up going over the same ground, constantly revisiting earlier decisions and designs, changing the same code over and over. You’re making progress – if change is progress – but it’s not linear and it’s not clean. You’re not proceeding in a clear direction to a “right answer” because there isn’t always a right answer. Sometimes you go backwards or in circles, trying out variants of old ideas and then rejecting them again, or just wandering around in a problem space trying stuff until something sticks. And then somebody new comes in who doesn’t understand or doesn’t like the design, tries something else, and leaves it for the next guy to pick up. Changes in design, false starts, dead ends and flip flops leave behind traces in the code. Even with constant and disciplined refactoring, the design won’t be as clean or as simple as it would be if you “got it right the first time”.

It doesn’t just wear down the code, it wears down the team too

Iterative development also has an erosive effect on an organization’s memory – on everyone’s understanding of the design and how the system works. For people who have been through too many changes in direction, shifts in priorities and backtracking, it’s difficult to remember what changed and when, what was decided and why, what design options were considered and why they were rejected before, what exceptions and edge cases came up that needed to be solved later, and what you need to know when you’re trying to troubleshoot a problem, fix a bug or make another design change.
Over the course of the last 6 or more years we've changed some ideas and some parts of the code a dozen times, or even dozens of times, sometimes in small, subtle but important ways, and sometimes in fundamental ways. Names stay the same, but they don’t mean what they used to.
The accumulation of all of these decisions and changes in design and direction muddies things. Most people can keep track of the main stories, the well-used main paths through the system. But it’s easy for smart people who know the design and code well to lose track of details, the exception paths and dependencies, the not-always-logical things that were done for one important customer, just because, 25 or 50 or 110 releases ago. It gets even more confusing when changes are rolled out incrementally, or turned on and off in A/B testing, so that the system behaves differently for different customers at different times.
People forget or misremember things, make wrong assumptions. It’s hard to troubleshoot the system, to understand when a problem was introduced and why, especially when you need to go back and recreate a problem that happened in the past. Or when you’re doing trend analysis and trying to understand why user behaviour changed over time – how exactly did the system work then? Testers miss bugs because they aren't clear about the impact of a change, and people report bugs – and sometimes even fix bugs – that aren't bugs, they've just forgotten what is supposed to happen in a specific case.
When making changes iteratively and incrementally, people focus mostly on the change that they are working on now, and they forget or don’t bother to consider the changes that have already been made. A developer thinks they know how things work because they’ve worked on this code before, but they forget or don’t know about an exception that was added in the past. A tester understands what needs to be tested based on what has just been changed, but can’t keep track of all of the compatibility and regression details that also need to be checked.
You end up depending a lot on your regression test suite to capture the correct understanding of how the system really works including the edge cases, and to catch oversights and regression mistakes when somebody makes a fix or a change. But this means that you have to depend on the people who wrote and maintained the tests and their understanding and their memory of how things work and what impact each change has had.

Iterative development comes with costs

It’s not just the constant pace, the feeling of being always-on, always facing a deadline that wears people down over time. It’s also the speed of change, the constant accumulation of small decisions, and reversing or altering those decisions over and over that wears down people’s understanding, that wears down the mental model that everyone holds of how the system works and how the details tie together. All of this affects people’s accuracy and efficiency, and their confidence.
I am not sure that there is a way to avoid this. Systems, teams, people all age, and like in real life, it’s natural that people will forget things. The more changes that you make, the more chances there are for you to forget something.
Writing things down isn't much of a help here. The details can all be found somewhere if you look: in revision history, in documentation, in the test suite and in the code. The problem is more with how people think the system works than it is with how the system actually works; with how much change people can keep up with, can hold in their heads, and how this affects the way they think and the way they work.
When you see people losing track of things, getting confused or making mistakes, you need to slow down, review and reset. Make sure that before people try to fix or change something they have a solid understanding of the larger design – that they are not just focusing on the specific problem they are trying to solve. Two heads are better than one in these cases. Pair people up: especially developers and testers together, to make sure that they have a consistent understanding of what a change involves. Design and code reviews too, to make sure that you’re not relying too much on one person’s memory and understanding. Just like in real life, as we get older, we need to lean on each other a bit more.

Sunday, June 30, 2013

When to leave the job (company)

Ref: http://java.dzone.com/articles/when-leave-your-programming

Technologists often rely on the more common and obvious signs that it is time to leave their employer (company product failures, layoffs, or reductions in pay/benefits) as primary motivators for making an exit. One could argue that experience at a failing company can be infinitely more valuable than time spent at a highly successful shop.  But waiting for those alarms to sound, which could be false alarms, is a mistake for your career.
When should you think about leaving your job?
  • You are clearly the ‘best’ programmer at the company and/or have no teacher or mentor available – Many people may get this wrong due to overconfidence, so you need to assess your skills honestly.  Even if you acknowledge you are not the best, do you have access to others that you can learn from that are both able and willing to share their knowledge with you?  Your company may have hired loads of great talent, but if these individuals are too busy to help or not interested in dialogue you are no better off than working alone.
  • The technologies employed are static and make your skills unmarketable – The extended use of dated, proprietary, or very specific technologies can kill your marketability.  If the firm is still using very early releases of popular languages or frameworks this could be a good indicator.  Multiple years in a stagnant tech environment is much worse than the same tenure in a shop that consistently improves their tools.
  • You have accomplished nothing – This is often not your fault.  Perhaps your company is consistently delaying releases and never seems to deliver on time, where the problem could lie in the development process or management decisions and not on tech talent.  If you look back on your stay with the company and can not point to any significant accomplishments (given a reasonable amount of time), consider the reasons why.
  • You are underpaid with no upside – There are at least a handful of justifications for accepting compensation below market rate.  The ability to work with great people is probably the #1 reason, with learning a valuable skill a close (and related) second.  If you took less money with the expectation of a future positive that just hasn’t panned out, it’s time to look at your options.
  • You are consistently passed over for interesting projects or promotion, and your ideas are not considered - If you are rarely given the plum assignments and not even a candidate for higher responsibility roles, the company simply doesn’t value your service.  The firm feels you are doing enough to keep your job, but they do not see you as a true long-term asset.  Keep track of how often you volunteer for a new venture and the company response.
  • You are no better off today than you were when you joined the company - The phrase ‘better off’ can take on a few meanings.  Traditionally one might use better off to refer to improved financial standing (raises), but you should add more qualified as well.  If your skills, marketability, and compensation have not improved after a reasonable amount of time, you need to question why you are still there.
  • You see little change in what you do – A consistent and small set of responsibilities for long durations tends to be a career killer.  Working on one small part of a very large project/product is usually the culprit.
  • You have no inspiration – Many domains in software development might not be all that interesting to you personally.  In those situations, the technical challenge at hand or the opportunity to do something truly innovative may trump the lack of industry interest.  Building a website for an insurance company might not be a dream, but scaling for millions of users could make it fulfilling.  If you are finding no value in the work you do and lack any inspiration, there is probably something out there that will get you excited.


Thursday, June 6, 2013

Linux RAM usage of process by name

Ref: http://abdussamad.com/archives/488-Memory-usage-of-a-process-under-Linux.html

To find the RAM usage of a process by name, one can use a command like the one below.

---------------


ps -C <process-name> -O rss

--------------

For example, to find the memory usage of Google Chrome, use a command like the one below.


ps -C chrome -O rss
Sample output would be like below...
PID   RSS S TTY          TIME COMMAND
3299 149484 S ?       00:01:57 /opt/google/chrome/chrome       
3310  7984 S ?        00:00:01 /opt/google/chrome/chrome       
3315 14564 S ?        00:00:00 /opt/google/chrome/chrome --type=zygote
3320 11112 S ?        00:00:00 /opt/google/chrome/chrome --type=zygote
3393 41984 S ?        00:00:00 /opt/google/chrome/chrome --type=renderer 

But to find the cumulative RAM usage of all those processes, we can use a small shell script which does the job for you.


#!/bin/bash
# Sum the RSS column for every process matching the name given as $1.
# (count is decremented in the END block to exclude the header line printed by ps.)
ps -C "$1" -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
Save it as psmem.sh, make it executable (chmod +x psmem.sh) and run it like this:
[admin@serve3 ~]$ ./psmem.sh httpd
Number of processes = 3
Memory usage per process = 9.83464 MB
Total memory usage = 29.5039 MB

Monday, June 3, 2013

Linux boot in Command Line Mode

To start Fedora in command-line mode, we have to edit the /etc/inittab file as root.

-------------BEFORE CHANGE CONTENT-------
id:5:initdefault:
----------------------------------------------------------------



-------------AFTER CHANGE CONTENT---------

id:3:initdefault:
----------------------------------------------------------------


As you might guess, changing the value back to 5 boots into graphical mode again.
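The same edit can be made in one line as root (swap the 3 and the 5 to reverse it):

sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab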

Thursday, May 23, 2013

10 Tips for Programmers

Ref 1: http://java.dzone.com/articles/how-stand-out-work-10-tips

Ref 2: http://java.dzone.com/articles/how-stand-out-work-10-tips-0


1) Don't hesitate to ask questions

2) Search for a niche

3) Get familiar with the “Big Picture” of the application

4) Do what you need to do, not what you like to do

5) Share your knowledge and help your colleagues

6) Try to give realistic estimates

7) Test, test, test!

8) Remember that all of us make mistakes

9) Promote strengths of your company in social networks

10) Small tips and tricks



Tuesday, May 7, 2013

extract a certain kind of unique text from each line of a file using perl


Please use the command below:

cat FILE_PATH | perl -pe 's/.*\"([^"]*)\".*/$1/' |sort |uniq

Replace FILE_PATH with the path of the file you would like to search.
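For instance, with a made-up input file like this, the one-liner keeps only the text between the double quotes on each line and de-duplicates it:

$ cat sample.log
login user "alice" from 10.0.0.1
login user "bob" from 10.0.0.2
login user "alice" from 10.0.0.3

$ cat sample.log | perl -pe 's/.*\"([^"]*)\".*/$1/' | sort | uniq
alice
bob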

Friday, April 26, 2013

HTTP proxy wrapper on top of SOCKS proxy

There are situations where an application can work only with an HTTP proxy, but in the corporate world you might need to use a SOCKS proxy to work with your VPN setup.

To work around this, I found a good proxy client which acts as an HTTP proxy on top of SOCKS dynamic port forwards.

To put it simply:

If you want to redirect all your HTTP proxy traffic through your SOCKS proxy, you can use privoxy (it can be installed with a command like yum install privoxy).

privoxy looks for a file called config in the current directory to find out where the SOCKS proxy is set up.

The relevant config line will be something like this if you opened the dynamic port forward with a command like (ssh -D *:15998 corporateMachine@corporateNetwork.com):


             forward-socks5 / localhost:15998 .               

Now just issue a command like the one below to run privoxy in the background.
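
             privoxy ./config               

(This assumes the config file from the previous step is in the current directory; privoxy detaches and runs as a daemon by default, so no explicit backgrounding is needed.)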

That's all; any HTTP proxy call will now be redirected through the above SOCKS proxy.

For example, if you want wget to go through the SOCKS proxy, just run the line below in your terminal and then issue your wget command.


        export http_proxy='http://localhost:8118'           

Here you cannot use any port other than 8118 unless you change the listen-address setting in privoxy's config, since 8118 is its default listening port :)
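A quick way to confirm the chain is working is to fetch your apparent public IP through the proxy (ifconfig.me is just one of several services that echo it back); it should show the corporate network's egress address rather than your own:

        wget -qO- http://ifconfig.me               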




Friday, April 12, 2013

UML Links for Beginners

Ref: http://java.dzone.com/articles/uml-linksheet



 UML
UML Profiles
UML 2.5

Web Site Admin use Case


Friday, March 29, 2013

Citrix ICAClient Google Chrome

Ref: http://kenfallon.com/finally-able-to-open-citrix-from-chrome/



Edit the file /usr/share/applications/wfica.desktop to include the following:

[Desktop Entry]
Name=Citrix ICA client
GenericName=Citrix ICA Client
Comment=Citrix nFuse session file
Categories=Application
Encoding=UTF-8
Exec=/opt/Citrix/ICAClient/wfica
Icon=wfica
Terminal=false
Type=Application
MimeType=application/x-ica
Edit the file /usr/share/mime/packages/ica.xml to include the following:

<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="application/x-ica">
    <comment>Citrix ICA launcher</comment>
    <glob pattern="*.ica"/>
  </mime-type>
</mime-info>
And finally run the command

xdg-mime install --novendor /usr/share/mime/packages/ica.xml