Factors Affecting Effective Inventory Control Essay Sample
1.1 BACKGROUND TO THE PROBLEM
The early 1980s saw tremendous expansion in the area of network deployment. As companies realized the cost benefits and productivity gains created by network technology, they began to add networks and expand existing networks almost as rapidly as new network technologies and products were introduced. By the mid-1980s, certain companies were experiencing growing pains from deploying many different (and sometimes incompatible) network technologies. The problems associated with network expansion affect both day-to-day network operation management and strategic network growth planning. Each new network technology requires its own set of experts. In the early 1980s, the staffing requirements alone for managing large, heterogeneous networks created a crisis for many organizations. An urgent need arose for automated network management (including what is typically called network capacity planning) integrated across diverse environments.
The goal of performance management is to measure and make available various aspects of network performance so that internetwork performance can be maintained at an acceptable level. Examples of performance variables that might be provided include network throughput, user response times and line utilization. Performance management involves three main steps. First, performance data is gathered on variables of interest to network administrators. Second, the data is analyzed to determine normal (baseline) levels. Finally, appropriate performance thresholds are determined for each important variable, so that exceeding these thresholds indicates a network problem worthy of attention. Management entities continually monitor performance variables. When a performance threshold is exceeded, an alert is generated and sent to the network management system. Each of the steps just described is part of the process to set up a reactive system.
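The three steps just described (gather, baseline, threshold) can be sketched as a toy monitor. This is only an illustrative outline, not any particular management system's API; the variable name, the sample values and the three-standard-deviations threshold policy are all assumptions made for the example:

```python
from statistics import mean, stdev

def baseline(samples):
    """Derive a normal (baseline) level and an alert threshold
    from historical samples of one performance variable."""
    m, s = mean(samples), stdev(samples)
    return m, m + 3 * s  # threshold: three standard deviations above normal

def monitor(variable, value, threshold):
    """Return an alert message when a sample exceeds its threshold."""
    if value > threshold:
        return f"ALERT: {variable}={value:.1f} exceeds threshold {threshold:.1f}"
    return None

# Step 1: historical line-utilization samples (percent) gathered by polling
history = [38, 41, 40, 39, 42, 40, 41, 39]
base, thresh = baseline(history)                  # step 2: normal level
print(monitor("line_utilization", 97.0, thresh))  # step 3: react with an alert
```

A real management entity would poll the device continuously and send the alert to the network management system rather than printing it.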
When performance becomes unacceptable because of an exceeded user-defined threshold, the system reacts by sending a message. Performance management also permits proactive methods: for example, network simulation can be used to project how network growth will affect performance metrics. Such simulation can alert administrators to impending problems so that counteractive measures can be taken.

The goal of security management is to control access to network resources according to local guidelines so that the network cannot be sabotaged (intentionally or unintentionally) and sensitive information cannot be accessed by those without appropriate authorization. A security management subsystem, for example, can monitor users logging on to a network resource and can refuse access to those who enter inappropriate access codes. Security management subsystems work by partitioning network resources into authorized and unauthorized areas.
For some users, access to any network resource is inappropriate, mostly because such users are usually company outsiders. For other (internal) network users, access to information originating from a particular department is inappropriate. Access to Human Resource files, for example, is inappropriate for most users outside the Human Resources department. Security management subsystems perform several functions. They identify sensitive network resources (including systems, files, and other entities) and determine mappings between sensitive network resources and user sets. They also monitor access points to sensitive network resources and log inappropriate access to sensitive network resources.
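The subsystem functions just listed — mapping sensitive resources to authorized user sets, then monitoring access and logging anything inappropriate — can be sketched as follows. The resource and user names are hypothetical, invented purely for illustration:

```python
# Mapping between sensitive network resources and their authorized user sets
authorized = {
    "hr_files":   {"hr_admin", "hr_clerk"},
    "payroll_db": {"hr_admin", "finance_admin"},
}

access_log = []  # log of inappropriate access attempts

def check_access(user, resource):
    """Allow access only to users mapped to the resource; log everything else."""
    allowed = user in authorized.get(resource, set())
    if not allowed:
        access_log.append((user, resource))
    return allowed

print(check_access("hr_admin", "hr_files"))    # True: user is in the mapped set
print(check_access("marketing1", "hr_files"))  # False: logged as inappropriate
```

Real subsystems enforce such mappings at many access points (file servers, databases, network devices), but the partition into authorized and unauthorized areas is the same idea.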
1.2 STATEMENT OF THE PROBLEM
Although network performance and security are major factors in achieving organizational goals, many organizations do not attain the level that is expected of them. Some of the problems that lead to this failure are as follows:
1. Most network use currently is for accessing the Internet, and low bandwidth impacts performance.
2. The number of workstations is growing rapidly (physical subnetting cuts down congestion).
3. Physical design of the network (layout and technology can impact performance).
4. Distance from the server.
5. Hacking and tapping.
Due to time constraints and the breadth of the topic under study, this report, and the study behind it, concentrates mainly on network performance and security as the major components of effective network management.
1.3 RESEARCH QUESTIONS
The research study aims at answering the following research questions as far as effective network management is concerned.
▪ What are the objectives of network performance in controlling complex data in the organisation?
▪ To what extent has network performance in controlling complex data been effective for the organization?
▪ Which procedures and rules does an organisation use to improve the efficiency of network performance in controlling complex data?
▪ How will the company handle security threats?
1.4 RESEARCH OBJECTIVES:
The study has the following main objective:
▪ To find out whether systems (network) administrators have been able to achieve their intended objective of ensuring that networks are protected against all forms of risks and problems, such as network failure, data loss, virus transmission and infection, misuse and illegal access of the organization’s sensitive data, resources and services (insecurity), power problems and similar other problems.
In order to realize the purpose stated above, the following are the specific measurable objectives of undertaking the study:
▪ To identify the key areas in managing networks.
▪ To identify common risks and problems facing computer networks, and their impacts on the organization.
▪ To find out the common solutions to these problems.
1.5 SIGNIFICANCE OF THE STUDY:
It has been found that not all organizations in Tanzania have been practicing network security and performance management; this research was therefore undertaken to make organizations aware of the subject of network performance and security, and to show its advantages and disadvantages. The study will be useful and of great importance to the organization, as well as to society, for various reasons, because it will:
a) Avail data to individuals who may develop an interest in conducting similar studies.
b) Show the areas in which the organization would benefit, among which are to:
• Reduce and control operating costs
• Improve the organization’s focus, control and professionalism
• Gain access to world-class capabilities
c) Assist in improving the organization’s network performance, efficiency and effectiveness in dealing with all aspects of security threats.
d) Give knowledge and understanding of computer network technology, with emphasis on network performance and security.
1.6 SCOPE OF THE STUDY:
The study was conducted at the Posts Corporation, Mbeya. The scope of this research was limited to the IT department, the Accounts/Finance department and the Marketing department. The organization serves as a representative of organizations in Tanzania. Thus the study concentrated on answering the research objectives in the organization, which is used as a case study.
1.7 LIMITATIONS OF THE STUDY
Several obstacles confronted the researcher during the period of undertaking this study at the Posts Corporation, Mbeya. Some of the problems encountered are mentioned hereunder:
• Some employees were very busy with their work, hence it was difficult to meet them for personal interviews. This situation forced the researcher to conduct interviews using questionnaire papers, although the response rate to the questionnaires was also not good enough.
• Unavailability of enough data, because some of the employees were reluctant to give necessary information for fear of revealing confidential matters of the organization.
• The time allocated, which was one semester (4 months), was not enough for collecting the necessary data through observation, since effective collection of data through the observation method requires a long time. Observation, still, is the best way of collecting data, because the researcher sees the problem he/she is researching in its reality.
• Funding was a limiting factor in relation to all the requirements of the whole period of the research study.
• Lack of close supervision by supervisors, because of them being far from the field station.
2.1 Network performance definition:
Network performance is a relative term and can mean many different things depending on the type of network. In general, network performance refers to the level of quality of service of a telecommunications product. Examples of network performance measures differ between circuit-switched networks and packet-switched networks such as ATM; in circuit-switched networks, network performance is synonymous with the Grade of Service.

2.2 An overview of network performance:
TCP, the most dominant protocol used on the Internet today, is a “reliable”, “window-based” protocol. Under ideal conditions, the best possible network performance is achieved when the data pipe between the sender and the receiver is kept full.
2.3 Bandwidth-Delay Product (BDP):
The amount of data that can be in transit in the network, termed the “Bandwidth-Delay Product,” or BDP for short, is simply the product of the bottleneck link bandwidth and the Round Trip Time (RTT). BDP is a simple but important concept in a window-based protocol such as TCP. Some of the issues discussed below arise because the BDP of today’s networks has increased far beyond what it was when the TCP/IP protocols were initially designed. In order to accommodate the large increases in BDP, some high performance extensions have been proposed and implemented in the TCP protocol. But these high performance options are sometimes not enabled by default and will have to be explicitly turned on by the system administrators.
In a “reliable” protocol such as TCP, the importance of BDP described above is that this is the amount of buffering that will be required in the end hosts (sender and receiver). The largest buffer the original TCP (without the high performance options) supports is limited to 64K Bytes. If the BDP is small either because the link is slow or because the RTT is small (in a LAN, for example), the default configuration is usually adequate. But for a path that has a large BDP, and hence requires large buffers, it is necessary to have the high performance options.
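Whether an end host will actually grant large buffers can be checked through the standard socket API. A minimal sketch using Python's `socket` module follows; note that the operating system may clamp the request to a configured maximum, and Linux reports back double the value set, so the granted sizes are system-dependent:

```python
import socket

def set_buffers(sock, size):
    """Request larger send/receive buffers and report what the OS granted."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
    return (sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF),
            sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for 256 kB at each end, well above the original 64 kB TCP limit
snd, rcv = set_buffers(s, 256 * 1024)
print(f"granted: send={snd} bytes, recv={rcv} bytes")
s.close()
```

If the granted values come back far below the requested size, the system-wide maximum (and possibly the RFC1323 options discussed below) must be raised by the administrator.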
Computing the BDP
To compute the BDP, we need to know the speed of the slowest link in the path and the Round Trip Time (RTT). The peak bandwidth of a link is typically expressed in Mbit/s (or, more recently, in Gbit/s). The round-trip delay (RTT) for a link can be measured with ping or traceroute; for WAN links it is typically between 10 msec and 100 msec. As an example, consider two hosts with GigE cards communicating across a coast-to-coast link over an Abilene connection (assuming a 2.4 Gbps OC-48 link); the bottleneck link will be the GigE card itself. The actual round trip time can be measured using ping, but we will use 70 msec in this example. Knowing the bottleneck link speed and the RTT, the BDP can be calculated as follows:

(1,000,000,000 bits / 1 second) × (1 byte / 8 bits) × (70 / 1,000 seconds) = 8,750,000 bytes = 8.75 MBytes
Based on these calculations, it is easy to see why the typical default buffer size of 64 Kbytes would be way inadequate for this connection.
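The same calculation can be expressed as a small helper; the link speed and RTT figures are the example's own assumptions:

```python
def bdp_bytes(bandwidth_bps, rtt_ms):
    """Bandwidth-Delay Product: bottleneck bandwidth times round-trip time."""
    return bandwidth_bps / 8 * (rtt_ms / 1000)  # bits/s -> bytes/s, ms -> s

bdp = bdp_bytes(1_000_000_000, 70)  # GigE bottleneck, 70 ms RTT
print(f"BDP = {bdp:,.0f} bytes = {bdp / 1e6:.2f} MBytes")   # 8.75 MBytes
print("64 kB default buffer adequate:", 64 * 1024 >= bdp)   # False
```

Plugging in a LAN-like RTT of 1 ms instead gives a BDP of 125 kB, which explains why the default configuration is usually adequate on a LAN but not on a long fat pipe.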
2.4 High Performance Networking Options
• TCP Selective Acknowledgments (SACK, RFC2018):
SACKs allow a receiver to acknowledge non-consecutive data. This is particularly helpful on paths with a large Bandwidth-Delay Product (BDP). While SACK is now supported by most operating systems, it may have to be explicitly turned on by the system administrator.
• Large Windows (RFC1323):
Without the support of this TCP enhancement, the buffer sizes that can be used by the application are limited to 64K Bytes. As we have seen in the BDP section above, this would be inadequate for today’s high speed WANs. On most systems, RFC1323 extensions are included but may require the system administrator to explicitly turn them on.
• Maximum Buffer Sizes on the host:
Typically, operating systems limit the amount of memory that can be used by an application for buffering network data. The host system must be configured to support large enough socket buffers for reading and writing data to the network. Typical UNIX systems include a default maximum value for the socket buffer size between 128 kB and 1 MB. For many paths this is not enough, and it must be increased. Please note that without the RFC1323 “Large Windows” extension indicated above, TCP/IP does not allow applications to buffer more than 64 kB in the network, irrespective of the maximum buffer size configured.
• Default Buffer Sizes:
The “Maximum Buffer Size” mentioned above sets a maximum limit for the buffers (as you may have guessed from the name!). In addition to this, most operating systems have a configurable, system-wide “default buffer size”. Unless an application explicitly requests a specific buffer size, it gets a buffer of the default size. Most system administrators set this value so that it is appropriate for a LAN, but it would not necessarily be sufficient for a WAN with a large BDP. System administrators usually have to make a judicious choice of default value and maximum value for the buffers.
• Application Buffers:
If the default buffer size is not large enough, the application must set its send and receive socket buffer sizes (at both ends) to at least the BDP of the link. Some network applications support options for the user to set the socket buffer size (for example, Cray UNICOS FTP); many do not.

2.5 Definition of network management
Network management may mean different things to different people in different perspectives. In some cases, it can be defined as the process of planning, allocating, deploying and managing the network. In other cases, it is the management of data on a large network so as to improve efficiency and productivity. Sometimes network management involves a distributed database, auto polling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic. In general, network management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks. Network management can be viewed as comprising five parts:
a) Performance management
b) Fault management
c) Configuration management
d) Security management
e) Accounting management
2.5.1 Performance management
Here the network is supposed to work as required, and the aim is optimal performance. Generally, performance management includes the following:
i. Throughput, which means the amount of data that can be transferred or transmitted within a given period of time in the system.
ii. Response time, which is the time difference between sending from a terminal and getting feedback; it is advised that this time should be as low as possible to ensure system performance.
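Both quantities can be measured directly by timing a transfer. The sketch below is a toy illustration: the sleep and the byte count are stand-ins for a real network operation, not actual traffic:

```python
import time

def measure(transfer_fn):
    """Return (response_time_s, throughput_bytes_per_s) for one transfer."""
    start = time.perf_counter()
    nbytes = transfer_fn()               # run the transfer, get bytes moved
    elapsed = time.perf_counter() - start
    return elapsed, nbytes / elapsed

def fake_transfer():
    time.sleep(0.05)     # simulate 50 ms of network delay
    return 1_000_000     # pretend 1 MB was moved

rt, tput = measure(fake_transfer)
print(f"response time = {rt * 1000:.0f} ms, throughput = {tput / 1e6:.1f} MB/s")
```

In practice a management entity would wrap real requests (for example, an HTTP fetch) with the same timing logic and feed the results into its baselines.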
2.5.2 Fault management
This is the process of detecting, isolating and fixing faults or problems in the network, and can be done manually or automatically.
2.5.3 Configuration management
This is the process of setting the system to work in the desired form.

2.5.4
This is the process of recording, classifying, and reporting the errors encountered in the network.
2.6 An overview of network security
One thing to bear in mind is that network security is a very complicated subject, historically only tackled by well-trained and experienced experts. However, as more and more organizations and companies become “wired”, an increasing number of them need to understand the basics of security in a networked world. According to Douglas I.J and P.J. Olson (1986), the control and security of the network is vital, as threats may be present due to the geographical areas in which parts of the network are located. Under these geographical areas, we have three essential security aspects, namely:
Physical Security: This largely refers to controlling access to computer systems and data by restricting access to the computer room.
Logical Security: This refers to the introduction of logical access security systems to protect communication networks against physical threat. Doyle, S. emphasizes this aspect in the sense that logical security access controls ensure that access through computers and terminals to an organization’s data, programs and information is controlled in some way, so that only authorized access is allowed (2000, p. 214).
Network Security: Most organizations make use of computer networks, which raises many additional security problems such as hacking and tapping. Of the above-mentioned security aspects, the study is going to discuss logical security in a large context, and network security as a part of it, as these two security components carry major weight in any issue of security. But before carrying on discussing security issues, it is better for us first to understand the meaning of security.
2.7 Definition of Network Security
According to Carnegie Mellon University (2001) Computer (network) security is the process of preventing and detecting unauthorized use of your computer (network). Prevention measures help you to stop unauthorized users (also known as “intruders”) from accessing any part of your computer system. Detection helps you to determine whether or not someone attempted to break into your system, if they were successful, and what they may have done.
It’s very important to understand that in security, one simply cannot say, “what’s the best firewall?” There are two extremes: absolute security and absolute access. The closest we can get to an absolutely secure machine is one unplugged from the network, power supply, locked in a safe, and thrown at the bottom of the ocean. Unfortunately, it isn’t terribly useful in this state.
A machine with absolute access is extremely convenient to use: it’s simply there, and will do whatever you tell it, without questions, authorization, passwords, or any other mechanism. Unfortunately, this isn’t terribly practical, either: the Internet is a bad neighborhood now, and it isn’t long before some bonehead will tell the computer to do something like self-destruct, after which, it isn’t terribly useful to you. Every organization needs to decide for itself where between the two extremes of total security and total access it needs to be. A policy needs to articulate this, and then define how that will be enforced with practices and such. Everything that is done in the name of security, then, must enforce that policy uniformly.
2.8 Types and sources of network problems
Since we’ve now covered enough background information on networking, then we can actually get into the security aspects of all of this. First of all, we’ll get into the types of threats there are against networked computers, and then some things that can be done to protect the organizations (and individuals as well) against various threats
2.8.1 Denial of Service
DoS (Denial-of-Service) attacks are probably the nastiest and most difficult to address. They are the nastiest because they are very easy to launch, difficult (sometimes impossible) to track, and it is not easy to refuse the requests of the attacker without also refusing legitimate requests for service.
The premise of a DoS attack is simple: send more requests to the machine than it can handle. There are toolkits available in the underground community that make this a simple matter of running a program and telling it which host to blast with requests. The attacker’s program simply makes a connection on some service port, perhaps forging the packet’s header information that says where the packet came from, and then drops the connection. If the host is able to answer 20 requests per second, and the attacker is sending 50 per second, obviously the host will be unable to service all of the attacker’s requests, much less any legitimate requests. Some things that can be done to reduce the risk of being stung by a denial of service attack include:
Not running your visible-to-the-world servers at a level too close to capacity
Using packet filtering to prevent obviously forged packets from entering into your network address space
Keeping up-to-date on security-related patches for your host’s operating system.
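The filtering idea behind the first two measures — admit requests only up to a configured rate and drop the excess — is commonly implemented as a token bucket. A minimal sketch follows; the 20-per-second figure echoes the example above and is purely illustrative:

```python
class TokenBucket:
    """Admit at most `rate` requests per second, with bursts up to `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        # Refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the rate: drop instead of queueing

bucket = TokenBucket(rate=20, burst=20)   # the host can answer 20 req/s
# 50 requests arriving within one second: roughly the burst plus one second's
# refill are admitted; the remainder are dropped
admitted = sum(bucket.allow(now=i / 50) for i in range(50))
print(admitted)
```

Real packet filters apply the same logic per source address or per service port, so that one abusive sender cannot starve everyone else.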
2.8.2 Unauthorized Access
“Unauthorized access” is a very high-level term that can refer to a number of different sorts of attacks. The goal of these attacks is to access some resource that your machine should not provide the attacker. For example, a host might be a web server, and should provide anyone with requested web pages. However, that host should not provide command shell access without being sure that the person making such a request is someone who should get it, such as a local administrator.
2.8.3 Executing Commands Illicitly
It’s obviously undesirable for an unknown and untrusted person to be able to execute commands on your server machines. There are two main classifications of the severity of this problem: normal user access, and administrator access. A normal user can do a number of things on a system (such as read files, mail them to other people, etc.) that an attacker should not be able to do. This might, then, be all the access that an attacker needs. On the other hand, an attacker might wish to make configuration changes to a host (perhaps changing its IP address, putting a start-up script in place to cause the machine to shut down every time it’s started or something similar). In this case, the attacker will need to gain administrator privileges on the host.
2.8.4 Confidentiality Breaches
We need to examine the threat model: what is it that you’re trying to protect yourself against? There is certain information that could be quite damaging if it fell into the hands of a competitor, an enemy, or the public. In these cases, it’s possible that compromise of a normal user’s account on the machine can be enough to cause damage.
2.8.5 Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories:
2.8.6 Data Diddling
The data diddler is likely the worst sort, since the fact of a break-in might not be immediately obvious. Perhaps he’s toying with the numbers in your spreadsheets, or changing the dates in your projections and plans. Maybe he’s changing the account numbers for the auto-deposit of certain paychecks. In any case, rare is the case when you’ll come in to work one day, and simply know that something is wrong.
2.8.7 Data Destruction
Some of those who perpetrate attacks are simply twisted jerks who like to delete things. In these cases, the impact on your computing capability, and consequently your business, can be nothing less than if a fire or other disaster caused your computing equipment to be completely destroyed.
An attacker gains access to your equipment through any connection that you have to the outside world. This includes Internet connections, dial-up modems, and even physical access. Thus in order to be able to adequately address security, all possible avenues of entry must be identified and evaluated. The security of that entry point must be consistent with your stated policy on acceptable risk levels.
2.8.8 Practices That Can Help Prevent Security Disasters
From looking at the sorts of attack that are common, we can derive a relatively short list of high-level practices that can help prevent security disasters, and help control the damage in the event that preventative measures are unsuccessful in warding off an attack.
2.8.9 Using Backups
Operational requirements should dictate the backup policy, and this should be closely coordinated with a disaster recovery plan, such that if a disaster such as fire or flood occurs, you’ll be able to carry on your business from another location. Similarly, backups can be useful in recovering your data in the event of an electronic disaster: a hardware failure, or a break-in that changes or otherwise damages your data.
2.9 Don’t misallocate data
This doesn’t occur to lots of people or organizations. As a result, information that doesn’t need to be accessible from the outside world sometimes is, and this can needlessly increase the severity of a break-in dramatically.
2.9.1 Avoid systems with single points of failure
Any security system that can be broken by breaking through any one component isn’t really very strong. In security, a degree of redundancy is good, and can help you protect your organization from a minor security breach becoming a catastrophe.
2.9.2 Stay current with relevant operating system patches
Be sure that someone who knows what you’ve got is watching the vendors’ security advisories. Exploiting old bugs is still one of the most common (and most effective) means of breaking into systems.

2.9.3 Have someone on staff be familiar with security practices
Having at least one person who is charged with keeping abreast of security developments is a good idea. This need not be a technical wizard, but could be someone who is simply able to read advisories issued by various incident response teams, and keep track of various problems that arise. Such a person would then be a wise one to consult on security-related issues, as he’ll be the one who knows whether web server software version such-and-such has any known problems, etc. This person should also know the “dos” and “don’ts” of security from reading different security books and magazines.
2.9.4 Firewalls
A firewall is simply a group of components that collectively form a barrier between two networks. Therefore, in order to provide some level of separation between an organization’s intranet and the Internet, a firewall should be employed. There are three basic types of firewalls:
2.9.5 Application Gateways
These are sometimes known as proxy gateways. The software runs at the Application Layer of the ISO/OSI Reference Model, hence the name. Clients behind the firewall must be proxified (that is, must know how to use the proxy, and be configured to do so) in order to use Internet services. Traditionally, these have been the most secure, because they don’t allow anything to pass by default, but need to have the programs written and turned on in order to begin passing traffic. They are also typically the slowest, because more processes need to be started in order to have a request serviced.

2.9.6 Packet Filtering
Packet filtering is a technique whereby routers have ACLs (Access Control Lists) turned on. By default, a router will pass all traffic sent via it, and will do so without any sort of restrictions. Employing ACLs is a method for enforcing your security policy with regard to what sorts of access you allow the outside world to have to your internal network, and vice versa. There is less overhead in packet filtering than with an application gateway, because the feature of access control is performed at a lower ISO/OSI layer (typically, the transport or session layer). Due to the lower overhead and the fact that packet filtering is done with routers, which are specialized computers optimized for tasks related to networking, a packet filtering gateway is often much faster than its application layer cousins. In some of these systems, new connections must be authenticated and approved at the application layer.
Once this has been done, the remainder of the connection is passed down to the session layer, where packet filters watch the connection to ensure that only packets that are part of an ongoing (already authenticated and approved) conversation are being passed. Another possibility is to use both packet filtering and application layer proxies. The benefits here include providing a measure of protection for your machines that provide services to the Internet (such as a public web server), as well as providing the security of an application layer gateway to the internal network. Additionally, with this method an attacker, in order to get to services on the internal network, will have to break through the access router, the bastion host, and the choke router.
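A router ACL of the kind described is, in essence, a first-match rule list applied to each packet's header fields. A simplified sketch follows; the addresses, ports and rules are made up for illustration, and a real router matches on full address prefixes, protocols and flags:

```python
# Each ACL rule: (source_prefix, dest_port, action); the first match wins.
ACL = [
    ("10.0.0.", 80, "permit"),   # internal clients may reach the web server
    ("",        23, "deny"),     # deny telnet from anywhere
    ("",        80, "permit"),   # anyone may reach port 80
]

def filter_packet(src_ip, dest_port, default="deny"):
    """Return the action of the first matching rule (implicit deny at the end)."""
    for prefix, port, action in ACL:
        if src_ip.startswith(prefix) and dest_port == port:
            return action
    return default

print(filter_packet("10.0.0.5", 80))     # permit: first rule matches
print(filter_packet("203.0.113.9", 23))  # deny: telnet blocked for everyone
print(filter_packet("203.0.113.9", 25))  # deny: no rule matches, implicit deny
```

The implicit deny at the end mirrors how most router ACLs behave: anything not explicitly permitted is dropped, which is the safer default for a security policy.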
2.9.7 The best firewall for organizations
Lots of options are available, and it makes sense to spend some time with an expert, either in-house, or an experienced consultant who can take the time to understand your organization’s security policy, and can design and build a firewall architecture that best implements that policy. Other issues like services required, convenience, and scalability might factor in to the final design.
2.9.8 Secure Network Devices
It is important to remember that the firewall is only one entry point to your network. Modems, if allowed to answer incoming calls, can provide an easy means for an attacker to sneak around your front door (or, firewall).
2.9.9 Secure Modems; Dial-Back Systems
If modem access is to be provided, it should be guarded carefully. The terminal server, or network device that provides dial-up access to the network, needs to be actively administered, and its logs need to be examined for strange behavior. Its passwords need to be strong, not ones that can be guessed. Accounts that aren’t actively used should be disabled. In short, it’s the easiest way to get into your network from remote: guard it carefully.
2.10 Crypto-Capable Routers
A feature that is being built into some routers is the ability to perform session encryption between specified routers. Because traffic traveling across the Internet can be seen by people in the middle who have the resources (and time) to snoop around, such routers are advantageous for providing connectivity between two sites over secure routes.
2.10.1 Virtual Private Networks (VPNs)
Given the ubiquity of the Internet, and the considerable expense in private leased lines, many organizations have been building VPNs (Virtual Private Networks). Traditionally, for an organization to provide connectivity between a main office and a satellite one, an expensive data line had to be leased in order to provide direct connectivity between the two offices. Now, a solution that is often more economical is to provide both offices connectivity to the Internet. Then, using the Internet as the medium, the two offices can communicate. The danger in doing this, of course, is that there is no privacy on this channel, and it’s difficult to provide the other office access to “internal” resources without providing those resources to everyone on the Internet.
VPNs provide the ability for two offices to communicate with each other in such a way that it looks like they’re directly connected over a private leased line. The session between them, although going over the Internet, is private (because the link is encrypted), and the link is convenient, because each can see each others’ internal resources without showing them off to the entire world. A number of firewall vendors are including the ability to build VPNs in their offerings, either directly with their base product, or as an add-on. If you need to connect several offices together, this might very well be the best way to do it.
As we have seen, there are many impacts of network technology facing Tanzanian companies and organizations, and among the major ones are network performance and security. Network management itself is a very difficult and wide topic. Everyone has a different idea of what is meant by “network management”, and what levels of risk are acceptable. The key to building a secure network is to define what security means to the organization.
Once that has been defined, everything that goes on with the network can be evaluated with respect to that policy. It is important to build systems and networks in such a way that users are not constantly reminded of the security system around them. Users who find security policies and systems too restrictive will find ways around them. It is important to get their feedback to understand what can be improved, and it is important to let them know why things have been done the way they have, which sorts of risks are deemed unacceptable, and what has been done to minimize the organization's exposure to them. Network security and performance are everybody's responsibility, and only through cooperation, an intelligent policy, and consistent practices will they be achievable.
3.1 Research design
This research was designed as a single case study with an exploratory research design. This enabled the researcher to obtain all the required data within the study period specified by the institute.
3.2 Research technique
The study used both qualitative and quantitative techniques. The qualitative technique was applied in collecting and analyzing non-numerical data such as information concerning the attitudes, understanding and behavior of respondents in the problem area. For quantitative (numerical) data, simple calculations and tables were used: questionnaires, for example, were coded and tallied, and the rate of response for each item was calculated and converted into a percentage. Qualitative (non-numerical) data were summarized and then displayed using tables, giving a report close to the truth thanks to its pictorial representation.
3.3 Population
Using the above-mentioned design, the Post Corporation located in Mbeya was chosen as a case study for conducting this research, whose findings were assumed to reflect or apply to all other organizations of the same nature. It is therefore the expectation of the researcher that the findings generated from this study can be used for generalization purposes.
The following sampling procedure was employed in this study:
• Simple random sampling
This involved the random selection of employees (staff) from the Information Technology department (system administrators) and other departments in the corresponding field area.
A reasonable sample was selected to include sixty per cent of the population, and at least forty per cent of the selected departments were included in the data collection.
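The simple random sampling step above can be sketched as follows; the 40-person staff register, the names and the fixed seed are assumptions made for the example, not the study's actual data.

```python
import random

def draw_sample(population: list, fraction: float = 0.6, seed: int = 42) -> list:
    """Draw a simple random sample covering the given fraction of the population."""
    size = max(1, round(len(population) * fraction))
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return rng.sample(population, size)

staff = [f"staff_{i:02d}" for i in range(40)]  # hypothetical staff register
sample = draw_sample(staff)
print(len(sample))  # 24, i.e. sixty per cent of 40 staff
```

Because `random.sample` draws without replacement, no employee can appear in the sample twice, which is what "simple random sampling" requires.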
3.6 Data Collection Methods
Data were carefully collected through inquiry using various methods for the purpose of obtaining relevant information and data concerning the study. In collecting data, the study used the following methods.
Interviews were carried out with staff of the Information Technology and other departments, who were asked questions in relation to the objectives of the study.
This involved observing the way the network of the organization functions, its structure and the technology used to build it. The researcher was involved in day-to-day activities so as to observe and collect information concerning the study. This technique helped to collect information concerning the attitudes, behaviour and perceptions of respondents in the problem area. The observation was systematically planned to ensure validity and reliability.
This involved going through the organization's files and laid-down policies, rules and procedures with regard to the use of computer facilities, to see whether they are consistently followed. This method was also employed to collect secondary data, mostly from the organisation's documents such as the Network Requirement Specification and related documents. Various other documents were found by the researcher; these included further information obtained from manuals, procedures, books, journals and others.
Questionnaires were prepared by the researcher and distributed to employees in the field area, particularly the IT department and other departments. The questionnaires were distributed according to the number of respondents obtained from the sample, and some were returned with answers after being filled in by the respondents. English was used in framing the research questions. The questionnaire consisted of both open-ended and closed questions.
3.7 Types of data to be collected
The study was designed to use both primary and secondary data.
3.7.1 Primary data
These were collected through questionnaires, observation, existing reports and personal interviews in the various organization departments and their surroundings, which relate directly or indirectly to the organization.
3.7.2 Secondary data
These were extracted through the review of various documents, for instance the strategic plan of the IT department, textbooks, journals, periodicals, management reports and other valuable published information expected to be available.
4.1 Presentation of the Findings
This chapter presents, analyses and discusses the research findings. Data collected from the questionnaires, interviews, available documents and direct observations were summarized, analysed and presented to show the situation. Following the research questions, below are the answers obtained from the study:
4.1.1 Major network resources/tools available in the organization
WORKSTATIONS: The organization has many computers connected to the network, most of them HP machines. The Information Technology Department is responsible for providing each employee with a computer connected to the network. It also maintains the computers: repairing and updating them, and replacing them if a computer has failed completely. The system administrator team makes sure that a backup is done before repairing or replacing a user's computer.

PRINTERS: The organization also has a number of network printers made ready for sharing through a standard TCP/IP port. In addition, some workers do not share printers, so their configuration is through a local port; these staff are in the Marketing department and the cash accounts section, which is part of the Post Corporation.
SERVERS: Servers are the heart and soul of today's computing infrastructure. They run mission-critical applications as well as core IT services such as e-mail, file, print and database services, so the availability and performance of the Post Corporation servers are critical to the smooth running of the organization. The organization has about two running servers. The servers are well arranged in a server farm (room) and well protected from power instability by a high-voltage UPS. Maintenance of these servers is done by the systems administrator.
SWITCHES: Switches are the backbone of an organization's LAN. Any problem in the switches affects a large proportion of LAN users. By implementing managed switches, the organization improves the security of the LAN. In addition, some utilities were implemented during the configuration of those switches, such as Switch Port Mapper, which helps the administrator quickly find out the list of devices connected to the switch ports. Other utilities help the LAN administrator monitor and troubleshoot switch ports for traffic, utilization and error verification.
ROUTERS: WAN links are usually the most expensive part of the network, and managing bandwidth allocation can be complex. The organization has one router, used for Internet services.
FIREWALL: This is also one of the network tools, used especially for network security. It has the ability to filter unauthorized packets trying to pass through the network.

CABLING SYSTEM: A well-planned structured cabling system is a crucial component of any network infrastructure. It assures:
• User expectations (uptime and efficiency)
• Security requirements
• Environmental and electrical requirements of the system (UPS, A/C, temperature control and fire suppression). The structured cabling system of the Post Corporation building does not satisfy the requirements of a proper structured cabling system, which at some point may affect the areas listed above.
4.1.2 Effectiveness of network performance:
Network managers in the organization document the services, network parameters and server information for each Post Corporation server and make them available to authorized personnel of the Post Corporation; this applies not only to servers but also to the Post Corporation network infrastructure (routers, switches, firewalls and others). This is what improves the availability, manageability and planning of the network infrastructure. Moreover, access to resources such as printers and the file server is fast, secure and accurate because the LAN administrator configured a DHCP server; for this, it is essential to determine and plan how many IP addresses will be used by hosts that offer services to the network.
In this manner a contiguous set of IP addresses is excluded from the DHCP scope, and only client hosts obtain IP addresses from the DHCP server. Applying this methodology avoids constant configuration changes on the DHCP server, printers and client computers. The helpdesk usually configures printers with static IP addresses. Moreover, the Post Corporation system network has a mechanism that allows network administrators to:
• View logged events in a centralized way (e.g. a syslog server)
• Create detection policies for unwanted registered log events
• Implement corrective actions
• Synchronize server clocks.
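The scope-exclusion planning described here can be illustrated with Python's `ipaddress` module. The subnet, the reserved range .1 to .49 for statically addressed hosts, and the pool ceiling of .200 are all assumptions made for the example, not the organization's real addressing plan.

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")      # hypothetical LAN subnet
reserved = {ipaddress.ip_address(f"192.168.1.{i}")    # static servers and printers
            for i in range(1, 50)}
pool_ceiling = ipaddress.ip_address("192.168.1.200")  # addresses above are kept spare

# Addresses the DHCP server may lease: everything in the subnet that is
# neither statically reserved nor above the ceiling.
dhcp_pool = [ip for ip in network.hosts()
             if ip not in reserved and ip <= pool_ceiling]

print(dhcp_pool[0], dhcp_pool[-1], len(dhcp_pool))  # 192.168.1.50 192.168.1.200 151
```

Working out the pool like this before configuring the server is exactly what avoids the constant reconfiguration of printers and clients mentioned above.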
A DHCP server contains IP addresses, lease durations and associated TCP/IP configuration information; it listens for client requests and processes them.
4.1.3 Network Performance Management Tools
Under this category the following tools were used:
Disk Cleanup: Sometimes a computer system may run low on, or completely out of, disk space. Under this circumstance, the system administrator used to free up disk space by removing unnecessary files such as temporary files, Internet cache files, unused programs and unnecessary program features that could safely be deleted to improve system speed. The Disk Cleanup utility was employed for this task. Because of the number of PCs available, to simplify the task the utility was scheduled to run automatically on a weekly basis.

Scandisk: This utility was used to check disk status. It checks for disk errors and attempts recovery of bad sectors. One big advantage of Scandisk is that even if the disk problems have not been solved, the damaged parts are marked so that when data is saved these parts are skipped. This has been proved to reduce the chance of losing data unintentionally.
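The scheduled temporary-file cleanup described above could be scripted along these lines; the directory layout and the seven-day cutoff are assumptions for the sketch, not the administrators' actual schedule.

```python
import os
import tempfile
import time
from pathlib import Path

def clean_temp_files(directory: Path, max_age_days: int = 7) -> int:
    """Delete files older than max_age_days and report how many were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for path in directory.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            try:
                path.unlink()
                removed += 1
            except OSError:
                pass  # file locked or in use; skip it, as Disk Cleanup does
    return removed

# Demonstration on a throwaway directory: one stale file, one fresh file.
demo = Path(tempfile.mkdtemp())
stale = demo / "cache.tmp"
stale.write_text("old cache data")
ten_days_ago = time.time() - 10 * 86400
os.utime(stale, (ten_days_ago, ten_days_ago))  # backdate the stale file
fresh = demo / "report.txt"
fresh.write_text("still needed")
print(clean_temp_files(demo))  # prints 1: only the stale file is removed
```

Scheduling such a script weekly (via Task Scheduler or cron) reproduces the automated cleanup the section describes.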
Disk Defragmenter: With long use of a computer system, files get scattered all over the disk volume, a condition called file fragmentation. File fragmentation affects computer speed. Disk Defragmenter rearranges files, programs, and unused space on the computer's hard disk so that programs run faster and files open more quickly. This was done once a month.
Windows Update: Computers were updated regularly to keep them current. This was done with the Windows Update utility on computers that had an Internet connection; for stand-alone PCs, relevant update CDs or downloads were used. Windows Update is a catalog of items such as drivers, patches, help files and Internet products that can be downloaded to keep a computer up to date.
4.1.4 Network Security Management
The network resources/tools have certainly been considered effective with regard to the responses obtained from the data collection methods. Network management has been declared effective due to the following facts:
• Network security is good and well maintained
• Performance of the network in relation to response time is considered good
4.1.5 Security wise
Not every user in the Post Corporation has an account to log in to the system and to the intranet mail server. The Evidence Unit staff are configured on the Post Corporation-OTP domain, while the other staff are configured on the Post Corporation domain. The IDs are created by the system administrators of both sides. The Post Corporation network is also protected from outsiders through a Cisco PIX firewall. In addition, the organization runs a live antivirus update program; it uses Symantec Corporate Edition antivirus, which has a direct link with the Symantec company and updates automatically.
Mail server security: In order to provide privacy and data integrity to Post Corporation System Ltd employees accessing mail through the Internet, the web-mail server was recently configured so that it can only be accessed through a standard encrypted channel, HTTPS (Hypertext Transfer Protocol over Secure Socket Layer).
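The effect of that configuration, forcing every plain-HTTP request for the web mail onto the encrypted channel, can be sketched as a small URL rewrite; the hostname is invented for the example.

```python
from urllib.parse import urlsplit, urlunsplit

def enforce_https(url: str) -> str:
    """Rewrite a plain-HTTP URL to its HTTPS equivalent; leave other URLs alone."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(enforce_https("http://mail.example.org/inbox"))  # https://mail.example.org/inbox
```

In practice the web server itself issues this redirect (or refuses port 80 entirely), so that credentials and message bodies never cross the Internet unencrypted.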
Security Update Services: These relieve individual users of the responsibility of obtaining and installing critical security updates from Internet sources. In this way the organization is able to control the software version on each client computer and its authenticity, improve the availability of the network, and save bandwidth at peak hours on the Post Corporation link.
4.1.6 Network Security Management Tools
Internal Security: Under this category the following tools were used:

Passwords: Since the network configuration was peer-to-peer, system security relied heavily on passwords on a share-level basis. Every important network resource to be shared among users or workgroups was password-protected. For resources where users needed only to see data, not modify it, read-only access was given. In other cases the entire drive was password-protected. All public folders were kept in a separate, shared folder.

Removing the Sharing Option: To enhance security, the sharing option was sometimes totally removed from certain resources. This is the most secure environment available for Windows 98/Me computers.

Use of Anti-Virus Programs: Anti-virus programs were used to protect the network from viral attack. The most preferred anti-virus software used is AVG Anti-Virus from Symantec Corporation.
The software is constantly updated, and all necessary settings such as automatic e-mail scanning, automatic start-up, and floppy-disk checking on insertion are well configured.

Spyware Cleaning: To stay clear of spyware, a program known as Spy Gold Cleaner was used. Spyware is a program used by hackers to gather information about a PC or network without the owner's knowledge and disclose it; spyware poses a great threat to network security.

External Security: The security mechanisms covered above are primarily concerned with internal security, that is, preventing users on the same local area network (LAN) from accessing files and other resources that they do not need or are not allowed to use. For external security, a firewall is used.
A firewall is a hardware or software product designed to protect a network from unauthorized access by outside parties, particularly via the Internet. It is a security system that stands between the local network and the outside world. A firewall screens all inbound and outbound traffic according to a set of rules that you define. Firewalls or gateways are the best available protection for a network with outside connections, as all traffic that enters or leaves the network must pass through them and can therefore be monitored. Used with good password rules, backups, and anti-virus software, they provide protection from external attacks. A tight security policy and a firewall allow system administrators to assure their clients that the system is safe, as well as providing peace of mind.
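The rule screening just described can be sketched as a toy packet filter: rules are checked in order, the first match decides, and the default is deny. The rule set below (internal hosts may reach a mail server, anyone may use HTTPS) is hypothetical, not the Cisco PIX configuration the organization actually runs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str               # "allow" or "deny"
    src: str                  # source address prefix; "" matches any source
    dst_port: Optional[int]   # None matches any destination port

# Hypothetical rule set: evaluated top to bottom, first match wins.
RULES = [
    Rule("allow", "192.168.1.", 25),  # internal hosts may reach the mail server
    Rule("allow", "", 443),           # anyone may use HTTPS
    Rule("deny", "", None),           # everything else is dropped
]

def screen(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for rule in RULES:
        if src_ip.startswith(rule.src) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"  # implicit default if no rule matched

print(screen("192.168.1.7", 25))  # allow: internal mail traffic
print(screen("10.0.0.9", 23))     # deny: telnet attempt from outside
```

Ending the rule list with an explicit catch-all deny is the standard "default deny" posture that real firewall products encourage.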
4.1.7 IMPORTANCE OF NETWORK PERFORMANCE, MANAGEMENT AND SECURITY
Network performance and security are essential in any networked organization. Through network performance, management and security, the system administrator is able to quantify, measure, report, analyse and control the performance of different network components such as links, routers and hosts, as well as end-to-end abstractions such as a path through the network. Network performance management also allows a network administrator to track which devices are on the managed network and the hardware and software configuration of those devices. These are the factors that allow the system to be well managed and controlled; without network performance and security management, the organization may not be successful.
4.2 DATA ANALYSIS
In relation to the proposed methods of data collection, the researcher applied the following methods: interview, questionnaire, documentation and observation. Based on the data collected through these methods, the prime objective was to arrive at a level of confidence reflecting the effectiveness of network performance and security in Tanzanian organizations.

Data from Questionnaires: Here the study wanted to know two major things:
• Whether end-users know anything about the network performance and security management programme (data collected from end-users)
• Major areas considered critical in network performance and security management (data collected from a system administrator)

Table 4.2.1: Data collected from end-users
Item: Whether end-users know anything about the network performance and security management programme

| Value Label         | No. of respondents | Percent (%) |
| (a) Yes             | 30                 | 60          |
| (b) No              | 6                  | 32          |
| (c) Other (specify) | -                  | -           |
| Not responded       | 3                  | 8           |
| Total               | 39                 | 100         |
Note: One respondent from IT Department was removed.
It can be seen that 40% of the respondents have never been involved in any matter relating to computers or the network performance and security management programme, while the remaining 60% have been involved, probably because of their positions (for example, department heads).
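Response percentages like those reported above can be computed from raw tallies along these lines; the counts here are illustrative, not the survey's actual data.

```python
from collections import Counter

# Coded questionnaire answers, one entry per returned questionnaire.
responses = ["yes"] * 7 + ["no"] * 2 + ["not_responded"]
counts = Counter(responses)
total = len(responses)
percentages = {answer: round(100 * n / total, 1) for answer, n in counts.items()}
print(percentages)  # {'yes': 70.0, 'no': 20.0, 'not_responded': 10.0}
```

Tallying with `Counter` and dividing by the total is exactly the "coded, tallied and converted into percentages" procedure the methodology chapter describes.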
Table 4.2.2: Data collected from a system administrator
Item: To identify major areas considered critical in network performance and security management

| Item                   | No. of respondents | Total | Percent (%) |
| Security Management    | 1                  | 1     | 100         |
| Performance Management | 1                  | 1     | 100         |
Generally, the IT department regards all the basic conceptual areas set forth by OSI as critical in network performance and security management and systems administration.
Data Collected From Interview & Questionnaires
This paragraph analyses the responses of the staff through their answers in the questionnaires returned to the researcher. Questionnaires were distributed to the sample, and the results were obtained from the sample size as shown in the table below:
Table 4.2.3: Number of staff selected as the sample
| Department            | No. of employees | Selected as sample | Sampling    | Sampling in % |
| System Administrators | 15               | 10                 | 10/24 × 100 | 41.7%         |
| Sales                 | 12               | 7                  | 7/24 × 100  | 29.2%         |
| Marketing             | 9                | 7                  | 7/24 × 100  | 29.2%         |
| Total                 | 36               | 24                 |             | 100%          |
Consider the graph below;
Graph 4.1: Comparison of the selected sample and the sampling percentage across departments
Data from Observation
Basically, the observation focused on the following areas:
• Observation of how the server farm is structured and managed. Here the researcher observed the physical structure of the server room, that is, the cabling systems; the number of network resources such as routers, switches and servers; electrical equipment such as the UPS; the line connection to the outside world; and the way devices are connected.
• Observation of how the network works inside the organization (locally). This refers to the sharing of resources within the organization itself, such as printers and files, and the use of the intranet, where the Post Corporation uses one of its servers for e-mail services.
The researcher observed how secure, how fast and how accessible it is. Backups are also done by a server available inside the same building.
• Observation of how vital data is transferred within the organization. The researcher observed that these data are kept on their respective servers, and that there are application programs concerned with these records, where each user has an ID and a password to access the shared data.

Documentation
Under this method, with the cooperation and kind assistance the researcher received from the Post staff, he was able to review different documents, particularly those concerning the organization's vital data as well as its computer network. Because the organization is well advanced in information technology, most records are kept electronically in their respective databases, so the researcher was able to obtain some documents through the website. Mostly, the researcher got data by visiting the Post Corporation network operations centre, where network performance and security documents are available. Other sources were newsletters and journals from the press unit. The connection is quite advanced compared with other organizations.
4.3 INTERPRETATION OF THE FINDINGS
Based on, and in relation to, the objectives of the study stated in chapter one, here is the interpretation of the findings obtained from the study. As far as the research findings are concerned, the researcher was able to identify the network resources/tools described; the common tools/resources identified are printers, workstations, switches, routers, servers and firewalls. In determining how much the use of effective network performance and security contributes to the overall operations of the organization, the researcher relied heavily upon the performance of controlling vital data through the computer network, particularly at the Post Corporation where the researcher conducted his studies.
A large group of network users commented on the accuracy and promptness of data transferred to them, such as acquisitions, annual maintenance service (AMC) reports, financial reports and e-mail, as well as the security of those records. This has led to many employees enjoying the network and producing good work from their assignments. Moreover, because of the large number of users demanding the network for a better way of doing their work, especially surfing and online studying, the organization at large started to expand its services by updating and installing more network performance and security management utilities and tools on the servers.
4.3.1 Interpretation of data
The organization, especially the IT department, is performing at a "very good" level in terms of organizational network performance and security management, both within and outside the organization. There were some groups which were either just satisfied or not satisfied with the performance, accessibility and security of the organization's network management, while other individuals commented on excellent performance, security and accessibility.
5.0 CONCLUSION AND RECOMMENDATIONS
According to the analysis and findings from the Post Corporation Mbeya, which relies on network performance and security for the daily processing of its transferred data, acquisitions, annual maintenance service (AMC) reports, financial reports and e-mail, the researcher concludes from this study as follows:
The Post Corporation Mbeya is one of the leading and most competitive information and communication technology (ICT) organizations. Through effective network performance and security, it has managed to store and retrieve data and files from its customers/members, which has enabled it to perform its daily work and activities quickly, accurately, efficiently and in a timely manner.
At a more advanced stage, it was networked through a Local Area Network (LAN). This enabled staff to access information throughout all Post Corporation Mbeya departments.
Furthermore, the network performance and security management system has led the Post Corporation to improve efficiency in managing massive files and data, processing them in a better way and providing better pension services to its members/customers. On top of that, it was observed that the network is sometimes slow, so processing activities were somewhat difficult, because transferring files requires a fast network. The researcher also observed that some staff were not well trained in the existing applications; training was given to only a few of them, and those who were not trained need some help, since they have different levels of knowledge of computer-based systems. The study assisted the researcher in gaining practical training, integrating the theoretical and practical aspects of the IT field.
Having studied how important network performance management and security are in the daily operation of the Post Corporation's activities, the researcher makes the following recommendations:
• Client-server networks provide centralized control over access to network resources. In addition, access security can be customized to meet organizational requirements, which helps to protect sensitive data from loss, destruction, theft or unauthorized disclosure. Above all, most client-server environments have discretionary access control, which allows a system administrator to selectively grant or deny access to a file (resource). This further increases system security as well as performance.
▪ The Post Corporation should improve the network facilities in its offices so as to speed up the transmission of data and signals from one place to another. This will solve the delays in operations and in member/customer services on the network management system.
▪ In the case of security, the Post Corporation should also improve security so as to ensure that massive information, as well as files and data, cannot easily be accessed by unauthorized users. To implement this, the organisation must change staff IDs and passwords regularly so as to make them hard for hackers to obtain.
▪ Post Corporation staff must be given seminars and continuous computer training so as to cope with the system and with changes in technology, covering how to manage the network system as well as other application packages, instead of calling in a consultant from another organization when the system fails. This will improve the efficiency of data and file processing as well as reduce costs to the organisation.