
Structure of Cache

 

➢ Cache is organized into pages; a page is the smallest unit of cache allocation. The size of a cache page is configured according to the application I/O size.
➢ Cache consists of the data store and tag RAM.
➢ The data store holds the data, whereas the tag RAM tracks the location of the data in the data store (see Fig 1.22) and on the disk.
➢ Entries in tag RAM indicate where data is found in cache and where the data belongs on the disk.
➢ Tag RAM includes a dirty bit flag, which indicates whether the data in cache has been committed to the disk.
➢ It also contains time-based information, such as the time of last access, which is used to identify cached information that has not been accessed for a long period and may be freed up. (A small sketch of these structures follows this list.)
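The cache structures described above can be summarized in a brief sketch. This is a simplified, hypothetical model in Python for illustration only; the names (TagEntry, dirty, last_access, disk_address) are invented here and do not come from any particular array vendor.

```python
import time

class TagEntry:
    """One tag RAM entry: tracks where a cache page's data lives."""
    def __init__(self, disk_address):
        self.disk_address = disk_address   # where the data belongs on disk
        self.dirty = False                 # True until the data is committed to disk
        self.last_access = time.time()     # used to find long-idle pages to free

class Cache:
    """Data store plus tag RAM, organized into pages (hypothetical sketch)."""
    def __init__(self, page_size_kb=64):
        self.page_size_kb = page_size_kb   # configured to match the application I/O size
        self.data_store = {}               # page number -> cached data
        self.tag_ram = {}                  # page number -> TagEntry

    def put(self, page_no, data, disk_address):
        self.data_store[page_no] = data
        entry = self.tag_ram.setdefault(page_no, TagEntry(disk_address))
        entry.dirty = True                 # modified in cache, not yet on disk
        entry.last_access = time.time()
```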

 

Read Operation with Cache

➢ When a host issues a read request, the storage controller reads the tag RAM to determine whether the required data is available in cache.
➢ If the requested data is found in cache, it is called a read cache hit (or read hit) and the data is sent directly to the host, without any disk operation (see Fig 1.23[a]). This provides a fast response time to the host (about a millisecond).
➢ If the requested data is not found in cache, it is called a cache miss and the data must be read from the disk. The back-end controller accesses the appropriate disks and retrieves the requested data. Data is then placed in cache and is finally sent to the host through the front-end controller.
➢ Cache misses increase I/O response time. A minimal sketch of this hit/miss path is shown below.
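A minimal sketch of the read path, assuming hypothetical read_from_disk and send_to_host helpers (both invented for illustration):

```python
def read_with_cache(cache, tag_ram, block, read_from_disk, send_to_host):
    """Serve a host read: a cache hit avoids the disk; a miss goes to the back end."""
    if block in tag_ram:                   # read hit: tag RAM locates the data in cache
        send_to_host(cache[block])         # ~1 ms response, no disk operation
        return "hit"
    data = read_from_disk(block)           # miss: back end retrieves data from disk
    cache[block] = data                    # place in cache first...
    tag_ram[block] = {"disk_address": block, "dirty": False}
    send_to_host(data)                     # ...then return it through the front end
    return "miss"

cache, tags = {}, {}
read_with_cache(cache, tags, 42, lambda b: b"data", print)   # "miss": disk is read
read_with_cache(cache, tags, 42, lambda b: b"data", print)   # "hit": served from cache
```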

➢ A pre-fetch, or read-ahead, algorithm is used when read requests are sequential. In a sequential read request, a contiguous set of associated blocks is retrieved. Several other blocks that have not yet been requested by the host can be read from the disk and placed into cache in advance. When the host subsequently requests these blocks, the read operations will be read hits.
➢ This process significantly improves the response time experienced by the host.
➢ The intelligent storage system offers fixed and variable pre-fetch sizes.
➢ In fixed pre-fetch, the intelligent storage system pre-fetches a fixed amount of data. It is most suitable when I/O sizes are uniform.
➢ In variable pre-fetch, the storage system pre-fetches an amount of data in multiples of the size of the host request. (A small sketch of both modes follows.)
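A small sketch of the two pre-fetch modes; the parameter names and default values are illustrative assumptions, not values from a real product:

```python
def prefetch_blocks(request_blocks, mode="fixed", fixed_amount=8, multiple=2):
    """How many extra blocks to read ahead on a sequential request.

    fixed:    always pre-fetch the same amount (best when I/O sizes are uniform)
    variable: pre-fetch a multiple of the host request size
    """
    return fixed_amount if mode == "fixed" else request_blocks * multiple

# A 16-block sequential read with variable pre-fetch pulls 32 extra blocks
assert prefetch_blocks(16, mode="variable") == 32
```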

 

Write Operation with Cache

 

➢ Write operations with cache provide performance advantages over writing directly to disks.
➢ When an I/O is written to cache and acknowledged, it is completed in far less time (from the host's perspective) than it would take to write directly to disk.
➢ Sequential writes also offer opportunities for optimization, because many smaller writes can be coalesced into larger transfers to the disk drives with the use of cache.
➢ A write operation with cache is implemented in the following ways:

➢ Write-back cache: Data is placed in cache and an acknowledgment is sent to the host immediately. Later, data from several writes is committed to the disk. Write response times are much faster because the write operations are isolated from the mechanical delays of the disk. However, uncommitted data is at risk of loss in the event of a cache failure.
➢ Write-through cache: Data is placed in the cache and immediately written to the disk, and an acknowledgment is sent to the host. Because data is committed to disk as it arrives, the risk of data loss is low, but write response time is longer because of the disk operations.

➢ Cache can be bypassed under certain conditions, such as a large write I/O.
➢ In this implementation, if the size of an I/O request exceeds a predefined size, called the write-aside size, writes are sent directly to the disk to reduce the impact of large writes consuming a large cache area.
➢ This is useful in an environment where cache resources are constrained and cache is required for small random I/Os. A sketch of these write policies is shown below.
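A sketch combining the three write behaviours (write-back, write-through, and write-aside). The helpers write_to_disk and ack_host and the threshold value are hypothetical:

```python
def write_with_cache(cache, block, data, write_to_disk, ack_host,
                     policy="write-back", write_aside_size=256):
    """Sketch of the write policies described above (sizes in bytes, illustrative)."""
    if len(data) > write_aside_size:       # large I/O: bypass cache entirely
        write_to_disk(block, data)         # keeps large writes from consuming cache space
        ack_host()
        return "write-aside"
    if policy == "write-through":
        cache[block] = data
        write_to_disk(block, data)         # committed to disk before the host is acknowledged
        ack_host()                         # slower response, low risk of data loss
        return "write-through"
    cache[block] = data                    # write-back: cache only; commit to disk later
    ack_host()                             # fast response; uncommitted data is at risk
    return "write-back"
```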

 

 

RAID Levels

➢ RAID level selection is determined by the following factors:
✓ Application performance
✓ Data availability requirements
✓ Cost
➢ RAID levels are defined on the basis of:
✓ Striping
✓ Mirroring
✓ Parity techniques
➢ Some RAID levels use a single technique, whereas others use a combination of techniques.
➢ The table shows the commonly used RAID levels.

 

RAID 0

➢ A RAID 0 configuration uses data striping techniques, where data is striped across all the disks within a RAID set. It therefore utilizes the full storage capacity of the RAID set.
➢ To read data, all the strips are put back together by the controller.
➢ Fig 1.14 shows RAID 0 in an array in which data is striped across five disks.
➢ When the number of drives in the RAID set increases, performance improves because more data can be read or written simultaneously.

 

RAID 1

➢ RAID 1 is based on the mirroring technique.
➢ In this RAID configuration, data is mirrored to provide fault tolerance (see Fig 1.15). A RAID 1 set consists of two disk drives, and every write is written to both disks.
➢ The mirroring is transparent to the host.
➢ During disk failure, the impact on data recovery in RAID 1 is the least among all RAID implementations, because the RAID controller uses the mirror drive for data recovery.
➢ RAID 1 is suitable for applications that require high availability and where cost is not a constraint.

 

Nested RAID

➢ Most data centers require both data redundancy and performance from their RAID arrays.
➢ RAID 1+0 and RAID 0+1 combine the performance benefits of RAID 0 with the redundancy benefits of RAID 1.
➢ They use both striping and mirroring techniques and combine their benefits.
➢ These types of RAID require an even number of disks, the minimum being four.

 

RAID 3

➢ RAID 3 stripes data for high performance and uses parity for improved fault tolerance.
➢ Parity information is stored on a dedicated drive so that data can be reconstructed if a drive fails. For example, out of five disks, four are used for data and one is used for parity.
➢ RAID 3 always reads and writes complete stripes of data across all disks, as the drives operate in parallel. There are no partial writes that update one out of many strips in a stripe.
➢ RAID 3 provides good bandwidth for the transfer of large volumes of data and is used in applications that involve large sequential data access, such as video streaming.

 

RAID 4

➢ RAID 4 stripes data for high performance and uses parity for improved fault tolerance. Data is striped across all disks except the parity disk in the array.
➢ Parity information is stored on a dedicated disk so that the data can be rebuilt if a drive fails. Striping is done at the block level.
➢ Unlike RAID 3, data disks in RAID 4 can be accessed independently, so that specific data elements can be read or written on a single disk without reading or writing an entire stripe. RAID 4 provides good read throughput and reasonable write throughput.

 

RAID 5

➢ RAID 5 is a versatile RAID implementation.
➢ It is similar to RAID 4 in that it uses striping, and the drives (strips) are independently accessible.
➢ The difference between RAID 4 and RAID 5 is the parity location. In RAID 4, parity is written to a dedicated drive, creating a write bottleneck for the parity disk.
➢ In RAID 5, parity is distributed across all disks, which overcomes the write bottleneck. Fig 1.18 illustrates the RAID 5 implementation; a sketch of one common parity-rotation scheme is shown below.
➢ RAID 5 is good for random, read-intensive I/O applications and is preferred for messaging, data mining, medium-performance media serving, and relational database management system (RDBMS) implementations, in which database administrators (DBAs) optimize data access.
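The rotation of parity across disks can be sketched as follows. The left-symmetric layout shown here is one common scheme; actual placement varies by implementation:

```python
def raid5_parity_disk(stripe_no, num_disks=5):
    """Disk that holds parity for a given stripe (left-symmetric rotation)."""
    return (num_disks - 1 - stripe_no) % num_disks

# With 5 disks, parity rotates so no single disk becomes a write bottleneck:
print([raid5_parity_disk(s) for s in range(5)])   # [4, 3, 2, 1, 0]
```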

 

 RAID 6

➢ RAID 6 includes a second parity element to enable survival in the event of the failure of two disks in a RAID group. Therefore, a RAID 6 implementation requires at least four disks.
➢ RAID 6 distributes the parity across all the disks. The write penalty in RAID 6 is higher than in RAID 5; therefore, RAID 5 writes perform better than RAID 6 writes. The rebuild operation in RAID 6 may take longer than in RAID 5 because of the two parity sets.

 

➢ There are three RAID techniques:
1. Striping
2. Mirroring
3. Parity

 

Striping

 

➢ Striping is a technique to spread data across multiple drives (more than one) in order to use the drives in parallel.
➢ All the read/write heads work simultaneously, allowing more data to be processed in a shorter time and increasing performance compared to reading and writing from a single disk.
➢ Within each disk in a RAID set, a predefined number of contiguously addressable disk blocks are defined as a strip.
➢ The set of aligned strips that spans all the disks within the RAID set is called a stripe.
➢ The figure shows physical and logical representations of a striped RAID set.

 

➢ Strip size (also called stripe depth) describes the number of blocks in a strip and is the maximum amount of data that can be written to or read from a single disk in the set.
➢ All strips in a stripe have the same number of blocks.
✓ A smaller strip size means that data is broken into smaller pieces when spread across the disks.
➢ Stripe size is the strip size multiplied by the number of data disks in the RAID set.
✓ E.g., in a five-disk striped RAID set with a strip size of 64 KB, the stripe size is 320 KB (64 KB × 5).
➢ Stripe width refers to the number of data strips in a stripe.
➢ Striped RAID does not provide any data protection unless parity or mirroring is used. A sketch of the strip/stripe geometry is shown below.
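The strip/stripe geometry can be expressed as a short sketch. The block and strip sizes follow the example above; the mapping function is a simplified illustration:

```python
def locate_block(logical_block, num_disks=5, strip_size_blocks=128):
    """Map a logical block number to (disk, block offset on that disk)."""
    blocks_per_stripe = num_disks * strip_size_blocks
    stripe_no = logical_block // blocks_per_stripe
    within_stripe = logical_block % blocks_per_stripe
    disk = within_stripe // strip_size_blocks
    offset = stripe_no * strip_size_blocks + within_stripe % strip_size_blocks
    return disk, offset

# 64 KB strips of 512-byte blocks = 128 blocks/strip; 5 disks -> 320 KB stripe
assert locate_block(0) == (0, 0)       # first strip on disk 0
assert locate_block(128) == (1, 0)     # next strip lands on disk 1
```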

 

Mirroring

➢ Mirroring is a technique whereby the same data is stored on two different disk drives, yielding two copies of the data.
➢ If one disk drive fails, the data is intact on the surviving disk drive (see Fig 1.12), and the controller continues to service the host's data requests from the surviving disk of the mirrored pair.
➢ When the failed disk is replaced with a new disk, the controller copies the data from the surviving disk of the mirrored pair.
➢ This activity is transparent to the host. (A small sketch of a mirrored pair follows this list.)
➢ Advantages:
✓ Complete data redundancy
✓ Fast recovery from disk failure
✓ Data protection
➢ Mirroring is not a substitute for data backup. Mirroring constantly captures changes in the data, whereas a backup captures point-in-time images of the data.
➢ Disadvantages:
✓ Mirroring involves duplication of data: the amount of storage capacity needed is twice the amount of data being stored.
✓ Expensive
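A minimal sketch of a mirrored pair, assuming in-memory dictionaries stand in for the two disk drives:

```python
class MirroredPair:
    """Every write goes to both disks; reads survive a single disk failure."""
    def __init__(self):
        self.disks = [{}, {}]                  # two copies of the data

    def write(self, block, data):
        for disk in self.disks:                # duplication is transparent to the host
            disk[block] = data

    def read(self, block, failed_disk=None):
        for i, disk in enumerate(self.disks):
            if i != failed_disk:               # controller serves from the surviving disk
                return disk[block]

    def rebuild(self, replaced_disk):
        survivor = self.disks[1 - replaced_disk]
        self.disks[replaced_disk] = dict(survivor)   # copy data onto the new disk
```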

Parity

➢ Parity is a method to protect striped data from disk drive failure without the cost of mirroring.
➢ An additional disk drive is added to hold parity, a mathematical construct that allows re-creation of the missing data.
➢ Parity is a redundancy technique that ensures protection of data without maintaining a full set of duplicate data.
➢ Calculation of parity is a function of the RAID controller.
➢ Parity information can be stored on separate, dedicated disk drives or distributed across all the drives in a RAID set.
➢ The figure shows a parity RAID set.
➢ The first four disks, labeled "Data Disks," contain the data. The fifth disk, labeled "Parity Disk," stores the parity information, which, in this case, is the sum of the elements in each row.
➢ Now, if one of the data disks fails, the missing value can be calculated by subtracting the sum of the remaining elements from the parity value.
➢ Here, computation of parity is represented as an arithmetic sum of the data; in practice, parity calculation is a bitwise XOR operation. A sketch of XOR parity is shown below.
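A short sketch of XOR parity: the parity strip is the XOR of the data strips, and any one missing strip can be recreated by XOR-ing the survivors with the parity. The byte values are arbitrary examples:

```python
from functools import reduce

def parity(strips):
    """Bitwise XOR across strips, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"\x0a\x10", b"\x05\x22", b"\x33\x01", b"\x0f\x0f"]   # four data disks
p = parity(data)                                              # the parity disk

# If disk 2 fails, XOR of the survivors and the parity recreates its contents
recovered = parity([data[0], data[1], data[3], p])
assert recovered == data[2]
```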

 

 

 

Redundant Arrays of Inexpensive Disks (RAID)

 

➢ RAID is the use of small-capacity, inexpensive disk drives as an alternative to the large-capacity drives common on mainframe computers.
➢ RAID was later redefined to refer to independent disks, to reflect advances in storage technology.

 

RAID Implementation Methods

➢ The two methods of RAID implementation are:

1. Hardware RAID.

2. Software RAID.

 

Hardware RAID

 

➢ In hardware RAID implementations, a specialized hardware controller is implemented either on the host or on the array.
➢ Controller card RAID is a host-based hardware RAID implementation in which a specialized RAID controller is installed in the host and the disk drives are connected to it.
➢ Manufacturers also integrate RAID controllers on motherboards.
➢ A host-based RAID controller is not an efficient solution in a data center environment with a large number of hosts.
➢ The external RAID controller is an array-based hardware RAID implementation.
➢ It acts as an interface between the host and the disks.
➢ It presents storage volumes to the host, and the host manages these volumes as physical drives.
➢ The key functions of RAID controllers are as follows:
✓ Management and control of disk aggregations
✓ Translation of I/O requests between logical disks and physical disks
✓ Data regeneration in the event of disk failures

 

Software RAID

 

➢ Software RAID uses host-based software to provide RAID functions.
➢ It is implemented at the operating-system level and does not use a dedicated hardware controller to manage the RAID array.
➢ Advantages compared to hardware RAID:
✓ Cost
✓ Simplicity
➢ Limitations:
✓ Performance: Software RAID affects overall system performance because of the additional CPU cycles required to perform RAID calculations.
✓ Supported features: Software RAID does not support all RAID levels.
✓ Operating system compatibility: Software RAID is tied to the host operating system; hence, upgrades to software RAID or to the operating system should be validated for compatibility. This leads to inflexibility in the data-processing environment.

 

Components of a Storage Area Network (SAN)

A SAN involves three basic components:

(a). Server
(b). Network Infrastructure
(c). Storage

These components are further classified into the following elements:

(1). Node port
(2). Cables
(3). Interconnection Devices
(4). Storage Array
(5). SAN Management Software

These are explained below.

 

1. Node port:
In Fibre Channel, devices such as the following are referred to as nodes:

Host
Storage
Tape libraries

Nodes have ports for transmitting data to and from other nodes. Ports operate in full-duplex data transmission mode, with a transmit (Tx) link and a receive (Rx) link.

 

2. Cables:
SAN implementations primarily use optical fiber cabling; copper cables are used for short-distance connectivity and optical cables for long-distance connections.
There are two types of optical cables, multi-mode fiber and single-mode fiber, described below.

 

Multi-mode fiber:
Also called MMF, it carries multiple rays of light, projected at different angles, simultaneously through the core of the cable. In MMF transmission, the light beams traveling inside the cable tend to disperse and collide. This collision weakens the signal strength after it travels a certain distance and is called modal dispersion.
MMF cables are used for distances up to 500 meters because of the signal degradation (attenuation) caused by modal dispersion.

 

Single-mode fiber:
Also called SMF, it carries a single beam of light through the core of the fiber. The small core of the cable reduces modal dispersion. SMF cables are used for distances up to 10 kilometers due to lower attenuation. SMF is costlier than MMF.
Standard Connectors (SC) and Lucent Connectors (LC) are commonly used fiber-optic connectors, supporting data transmission speeds up to 1 Gbps and 4 Gbps respectively. The Small Form-factor Pluggable (SFP) is an optical transceiver used in optical communication, with transmission speeds up to 10 Gbps.

 

3. Interconnection Devices:
The commonly used interconnection devices in a SAN are:

Hubs
Switches
Directors

Hubs are communication devices used in Fibre Channel implementations. They connect nodes in a loop or star topology.
Switches are more intelligent than hubs: they route data directly from one port to another. They are relatively inexpensive, and their performance is better than that of hubs.
Directors are larger than switches and are used in data center implementations. Directors have higher fault tolerance and a higher port count than switches.

 

4. Storage Array:

A disk array, also called a storage array, is a data storage system used for block-based storage, file-based storage, or object storage. The term describes dedicated storage hardware that contains spinning hard disk drives (HDDs) or solid-state drives (SSDs).

The fundamental purpose of a SAN is to provide host access to storage resources. SAN storage implementations provide:

high availability and redundancy,
improved performance,
business continuity, and
multiple host connectivity.

5. SAN Management Software:
This software manages the interfaces between the hosts, interconnection devices, and storage arrays. It provides key management functions, such as the mapping of storage devices and switches and the logical partitioning of the SAN, called zoning. It also manages the important components of the SAN, such as storage devices and interconnection devices. A conceptual sketch of zoning follows.
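Zoning can be pictured as a table of which node ports are allowed to see each other. This is a conceptual sketch only; the WWN-style port names are made up:

```python
# Hypothetical zones: each zone lists the ports allowed to communicate
zones = {
    "zone_db":  {"10:00:00:00:c9:aa:01", "50:06:01:60:bb:02"},   # host <-> storage
    "zone_app": {"10:00:00:00:c9:aa:02", "50:06:01:60:bb:03"},
}

def can_communicate(port_a, port_b):
    """Two ports may talk only if some zone contains both of them."""
    return any(port_a in z and port_b in z for z in zones.values())

assert can_communicate("10:00:00:00:c9:aa:01", "50:06:01:60:bb:02")
assert not can_communicate("10:00:00:00:c9:aa:01", "50:06:01:60:bb:03")
```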

 

File System

➢ A file is a collection of related records or data stored as a unit with a name.
➢ A file system is a hierarchical structure of files.
➢ A file system enables easy access to data files residing within a disk drive, a disk partition, or a logical volume.
➢ It provides users with the functionality to create, modify, delete, and access files.
➢ Access to files on the disks is controlled by the permissions assigned to the file by the owner, which are also maintained by the file system.
➢ A file system organizes data in a structured, hierarchical manner via the use of directories, which are containers for storing pointers to multiple files.
➢ All file systems maintain a pointer map to the directories, subdirectories, and files that are part of the file system.
➢ Examples of common file systems are:
✓ NT File System (NTFS) for Microsoft Windows
✓ UNIX File System (UFS) for UNIX
✓ Extended File System (EXT2/3) for Linux

 

➢ The file system also includes a number of other related records, which are collectively called the metadata.
➢ For example, the metadata in a UNIX environment consists of the superblock, the inodes, and the list of data blocks free and in use.
➢ A superblock contains important information about the file system, such as the file system type, creation and modification dates, size, and layout.
➢ An inode is associated with every file and directory and contains information such as the file length, ownership, access privileges, time of last access/modification, number of links, and the address of the data.
➢ A file system block is the smallest "unit" allocated for storing data.

 

➢ The following list shows the process of mapping user files to the disk storage subsystem with an LVM (see Fig 1.8); a minimal sketch of this chain follows the list.
1. Files are created and managed by users and applications.
2. These files reside in the file systems.
3. The file systems are mapped to file system blocks.
4. The file system blocks are mapped to logical extents of a logical volume.
5. These logical extents in turn are mapped to the disk physical extents, either by the operating system or by the LVM.
6. These physical extents are mapped to the disk sectors in a storage subsystem.
If there is no LVM, then there are no logical extents; file system blocks are mapped directly to disk sectors.
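A minimal sketch of steps 3 to 6, with and without an LVM. The mapping tables are hypothetical and deliberately simplified (real extents span many blocks):

```python
def block_to_sector(fs_block, lvm=None, sectors_per_block=8):
    """Follow the chain: file system block -> (logical -> physical extent) -> sector."""
    if lvm is None:
        return fs_block * sectors_per_block            # no LVM: blocks map straight to sectors
    logical_extent = lvm["block_to_le"][fs_block]      # step 4
    physical_extent = lvm["le_to_pe"][logical_extent]  # step 5
    return physical_extent * sectors_per_block         # step 6

lvm = {"block_to_le": {0: 0, 1: 0, 2: 1}, "le_to_pe": {0: 7, 1: 9}}
print(block_to_sector(2, lvm))   # 72 (through the LVM)
print(block_to_sector(2))        # 16 (direct mapping, no LVM)
```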

➢ The file system tree starts with the root directory, which has a number of subdirectories.
➢ A file system can be either:
✓ a journaling file system, or
✓ a nonjournaling file system.

 

 

➢ Intelligent storage systems are feature-rich RAID arrays that provide highly optimized I/O processing capabilities.
➢ These storage systems are configured with a large amount of memory (called cache) and multiple I/O paths, and they use sophisticated algorithms to meet the requirements of performance-sensitive applications.
➢ An intelligent storage system consists of four key components (refer to Fig 1.21):

✓ Front End
✓ Cache
✓ Back End
✓ Physical Disks

Fig 1.21 Components of an Intelligent Storage System

 

Front End

➢ The front end provides the interface between the storage system and the host.
➢ It consists of two components:
i. Front-end ports
ii. Front-end controllers
➢ A front end has redundant controllers for high availability, and each controller contains multiple front-end ports that enable large numbers of hosts to connect to the intelligent storage system.
➢ Each front-end controller has processing logic that executes the appropriate transport protocol, such as Fibre Channel, iSCSI, FICON, or FCoE, for storage connections.
➢ Front-end controllers route data to and from cache via the internal data bus.
➢ When the cache receives the write data, the controller sends an acknowledgment message back to the host.

 

Cache

➢ Cache is semiconductor memory where data is placed temporarily to reduce the time required to service I/O requests from the host.
➢ Cache improves storage system performance by isolating hosts from the mechanical delays associated with rotating disks (hard disk drives, or HDDs).
➢ Rotating disks are the slowest components of an intelligent storage system. Data access on rotating disks usually takes several milliseconds because of seek time and rotational latency.
➢ Accessing data from cache is fast and typically takes less than a millisecond.
➢ On intelligent arrays, write data is first placed in cache and then written to disk.

 

Back End

➢ The back end provides an interface between cache and the physical disks.
➢ It consists of two components:
i. Back-end ports
ii. Back-end controllers
➢ The back end controls data transfers between cache and the physical disks.
➢ From cache, data is sent to the back end and then routed to the destination disk.
➢ Physical disks are connected to ports on the back end.
➢ The back-end controller communicates with the disks when performing reads and writes and also provides additional, but limited, temporary data storage.
➢ The algorithms implemented on back-end controllers provide error detection and correction, as well as RAID functionality.
➢ For high data protection and high availability, storage systems are configured with dual controllers with multiple ports.

 

Physical Disk

➢ A physical disk stores data persistently.
➢ Physical disks are connected to the back-end storage controller and provide persistent data storage.
➢ Modern intelligent storage systems support a variety of disk drives with different speeds and types, such as FC, SATA, SAS, and flash drives.
➢ They also support the use of a mix of flash, FC, and SATA drives within the same array.

 

 

 

➢ A protocol enables communication between the host and storage.
➢ Protocols are implemented using interface devices (or controllers) at both the source and the destination.
➢ The popular interface protocols used for host-to-storage communication are:
i. Integrated Device Electronics/Advanced Technology Attachment (IDE/ATA)
ii. Small Computer System Interface (SCSI)
iii. Fibre Channel (FC)
iv. Internet Protocol (IP)

IDE/ATA and Serial ATA:
➢ IDE/ATA is a popular interface protocol standard used for connecting storage devices, such as disk drives and CD-ROM drives.
➢ This protocol supports parallel transmission and therefore is also known as Parallel ATA (PATA) or simply ATA.
➢ IDE/ATA has a variety of standards and names.
➢ The Ultra DMA/133 version of ATA supports a throughput of 133 MB per second.
➢ In a master-slave configuration, an ATA interface supports two storage devices per connector.
➢ If performance of the drive is important, sharing a port between two devices is not recommended.
➢ The serial version of this protocol is known as Serial ATA (SATA) and supports single-bit serial transmission.
➢ Its high performance and low cost have allowed SATA to replace PATA in newer systems.
➢ SATA revision 3.0 provides a data transfer rate up to 6 Gb/s.

SCSI and Serial SCSI:
➢ SCSI has emerged as a preferred connectivity protocol in high-end computers.
➢ This protocol supports parallel transmission and offers improved performance, scalability, and compatibility compared to ATA.
➢ The high cost associated with SCSI limits its popularity among home or personal desktop users.
➢ SCSI supports up to 16 devices on a single bus and provides data transfer rates up to 640 MB/s.
➢ Serial attached SCSI (SAS) is a point-to-point serial protocol that provides an alternative to parallel SCSI.
➢ A newer version of serial SCSI (SAS 2.0) supports a data transfer rate up to 6 Gb/s.

Fibre Channel (FC):
➢ Fibre Channel is a widely used protocol for high-speed communication to the storage device.
➢ Fibre Channel interface provides gigabit network speed.
➢ It provides a serial data transmission that operates over copper wire and optical fiber.
➢ The latest version of the FC interface (16FC) allows transmission of data up to 16 Gb/s.

Internet Protocol (IP):
➢ IP is a network protocol that has been traditionally used for host-to-host traffic.
➢ With the emergence of new technologies, an IP network has become a viable option for host-to-storage communication.
➢ IP offers several advantages:
✓ cost
✓ maturity
✓ it enables organizations to leverage their existing IP-based networks.
➢ iSCSI and FCIP protocols are common examples that leverage IP for host-to-storage communication.
Key characteristics of data center elements are:

1) Availability: All data center elements should be designed to ensure accessibility. The inability of users to access data can have a significant negative impact on a business.
2) Security: Policies, procedures, and proper integration of the data center core elements must be established to prevent unauthorized access to information. Specific mechanisms must enable servers to access only their allocated resources on storage arrays.
3) Scalability: Data center operations should be able to allocate additional processing capabilities (eg: servers, new applications, and additional databases) or storage on demand, without interrupting business operations. The storage solution should be able to grow with the business.
4) Performance: All the core elements of the data center should be able to provide optimal performance and service all processing requests at high speed. The infrastructure should be able to support performance requirements.
5) Data integrity: Data integrity refers to mechanisms such as error correction codes or parity bits which ensure that data is written to disk exactly as it was received. Any variation in data during its retrieval implies corruption, which may affect the operations of the organization.
6) Capacity: Data center operations require adequate resources to store and process large amounts of data efficiently. When capacity requirements increase, the data center must be able to provide additional capacity without interrupting availability, or, at the very least, with minimal disruption. Capacity may be managed by reallocation of existing resources, rather than by adding new resources.
7) Manageability: A data center should perform all operations and activities in the most efficient manner. Manageability can be achieved through automation and the reduction of human (manual) intervention in common tasks.
 ➢ Historically, organizations had centralized computers (mainframe) and information storage devices (tape reels and disk packs) in their data center.
➢ The evolution of open systems and the affordability and ease of deployment that they offer made it possible for business units/departments to have their own servers and storage.
➢ In earlier implementations of open systems, the storage was typically internal to the server. This approach is referred to as server-centric storage architecture (see Fig 1.4 [a]).
➢ In this server-centric storage architecture, each server has a limited number of storage devices, and any administrative tasks, such as maintenance of the server or increasing storage capacity, might result in unavailability of information.
➢ The rapid increase in the number of departmental servers in an enterprise resulted in unprotected, unmanaged, fragmented islands of information and increased capital and operating expenses.
➢ To overcome these challenges, storage evolved from server-centric to information-centric architecture.
➢ In information-centric architecture, storage devices are managed centrally and independent of servers.
➢ These centrally-managed storage devices are shared with multiple servers.
➢ When a new server is deployed in the environment, storage is assigned from the same shared storage devices to that server.
➢ The capacity of shared storage can be increased dynamically by adding more storage devices without impacting information availability.
➢ In this architecture, information management is easier and cost-effective.
➢ Storage technology and architecture continue to evolve, enabling organizations to consolidate, protect, optimize, and leverage their data to achieve the highest return on information assets.

View