set custom lcd panel text on poweredge r720

I recently bought a pair of these servers to take over VMware duties from a pair of HP ProLiant DL380 G5 servers. Having had a few bad Dell experiences years ago, I had stopped buying PowerEdge machines, as I considered their design to be inferior (think PE1850), but I'm pleasantly surprised by these R710 machines.

In the server’s own BIOS options there is a Custom LCD field but entering text here and restarting doesn’t change the panel – it still just shows the Service Tag. Strangely, the iDRAC BIOS doesn’t offer you any control here at all, it just lists what the custom string currently is.

To make matters worse, I had accidentally got the desired result on one of the servers, but couldn't get the second one configured. The answer lies with the buttons next to the LCD. Though you can view IP settings, temperature, power usage, etc., there is also a Setup option. With 48GB of RAM, each POST of the machine takes about 5 minutes, so I had been too cautious to mess about with these options in case I undid some of my initial iDRAC config. I assumed that they would only provide a subset of the BIOS options. Wrong! You need to use the panel – even the iDRAC WebUI doesn't seem to configure the LCD screen.

You can change it in BIOS Setup (F2) by going to Embedded Server Management and setting Front-Panel LCD Options to User-Defined String, then going to User-Defined LCD String to set the string.

You can also change it from the OS using OMSA (OpenManage Server Administrator) by going to System, Main System Chassis, Front Panel (sub tab), and setting LCD Line to Custom.

I haven't found a complete reference of Dell's proprietary IPMI commands, but according to the documentation I found here, the first invocation of ipmitool puts the supplied string into one of the display's registers, and the second one flips the display buffer to actually show this.
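
As a sketch of what those two invocations look like, the helper below builds the argument lists for ipmitool. Note the raw byte values (NetFn 0x6, command 0x58, selectors 0xc1/0xc2) are Dell-proprietary and come from third-party write-ups, not official documentation – verify them against your own hardware before use:

```python
# Build argument lists for the two ipmitool invocations described above.
# WARNING: the raw bytes (0x6 0x58 0xc1/0xc2 ...) are Dell-proprietary values
# as reported in third-party documentation, not verified against official docs.

def lcd_set_string_args(text: str) -> list:
    """Arguments that load `text` into the LCD's user-string register."""
    if len(text) > 14:  # the panel only shows a handful of characters
        raise ValueError("string too long for the LCD")
    data = [len(text)] + [ord(c) for c in text]
    return ["raw", "0x6", "0x58", "0xc1", "0x0", "0x0"] + [hex(b) for b in data]

def lcd_show_user_string_args() -> list:
    """Arguments that flip the display buffer to show the user string."""
    return ["raw", "0x6", "0x58", "0xc2", "0x0"]

# Print the full commands you would hand to ipmitool:
print("ipmitool " + " ".join(lcd_set_string_args("ESX01")))
print("ipmitool " + " ".join(lcd_show_user_string_args()))
```

The hostname "ESX01" is just an illustrative string; substitute whatever you want on the panel.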

The Dell PowerEdge R720 12th Generation is a 2-socket, 2U server that features the Intel Xeon E5-2600 processor family and supports up to 768GB of DDR3 memory. Dell offers the R720 in various backplane configurations with up to 16 2.5-inch internal hard drives or 8 3.5-inch drives. A new optional feature, though, designed to take the performance compute server market by storm, is a set of 4 hot-plug front-access 2.5-inch Express Flash PCIe SSDs geared for high throughput and incredibly low latency. The PowerEdge R720’s Express Flash connectivity makes it unique among servers of its class, and is one of the reasons we have added two R720 units to the lab.

It’s not just storage technology itself that is a moving target – other critical components of enterprise storage infrastructure like interconnects and compute platforms are also continually evolving. Compute servers are not our primary focus, but performance and scalability differences between similarly-spec’d servers from different manufacturers or different generations from the same manufacturer can have important consequences for storage performance. It is also important to understand how factors like chassis construction and layout will affect long-term routine maintenance of a server.

StorageReview’s two new PowerEdge R720 servers feature Xeon E5-2640 2.50GHz processors with 15M cache and 7 PCIe slots to power real-world enterprise testing environments as well as see how storage devices perform when used in conjunction with compute servers from various manufacturers. One of our R720 servers is configured with 8 2.5-inch SFF internal drive bays and 4 front-accessible Express Flash bays. The other R720 features 16 2.5-inch SFF internal drive bays. Our review will focus on the PowerEdge R720 with Express Flash, noting key differences between the two when appropriate. The PowerEdge R720 is also available with a chassis configured for 8 3.5-inch LFF drives.

Availability: High-efficiency, hot-plug, redundant power supplies; hot-plug drive bays; TPM; dual internal SD support; hot-plug redundant fan; optional bezel; luggage-tag; ECC memory, interactive LCD screen; extended thermal support; ENERGY STAR® compliant, extended power range; switch agnostic partitioning (SWAP)

The most notable option offered by the PowerEdge R720 is a chassis that supports up to four front-access 2.5-inch PCIe Express Flash drives. The PowerEdge R720 uses a x16 PCIe breakout board for Express Flash connectivity, each drive requiring four lanes. Express Flash storage can be configured as cache or as primary storage, offering lower latency and much greater performance than an SSD connected via SAS or SATA. Express Flash drives supplied in our R720 are 2.5-inch Micron P320h models, which use 34nm SLC NAND and are engineered for write-heavy applications. Dell warrants the lifetime of Express Flash drives in terms of bytes written; current 175GB and 350GB models offer 12.5 and 25 petabytes of drive writes, respectively. Dell software management applications can be configured to notify the server administrator when these wear limits are nearing. Our Express Flash R720 shipped with four 350GB drives.
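
To put those byte-written warranty figures in context, they can be restated as drive writes per day (DWPD). The 5-year service term below is our assumption for illustration, not something Dell states here:

```python
# Restate the quoted endurance figures as drive writes per day (DWPD),
# assuming a 5-year service term (an illustrative assumption).

def dwpd(capacity_gb: float, endurance_pb: float, years: float = 5) -> float:
    total_writes_gb = endurance_pb * 1_000_000  # PB -> GB, decimal units
    return total_writes_gb / (capacity_gb * years * 365)

for cap, pb in [(175, 12.5), (350, 25)]:
    print(f"{cap}GB model: {pb} PB written ~ {dwpd(cap, pb):.0f} DWPD over 5 years")
```

Both capacities work out to roughly the same (very high, SLC-class) write endurance per day, which matches the drives being "engineered for write-heavy applications."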

Express Flash PCIe SSDs support orderly insertion, when a drive is added to a running system in a bay where an Express Flash drive has not been previously inserted since booting. Express Flash also supports orderly removal, where the system is notified prior to drive removal, and orderly swap, when a drive can be replaced with prior system notification. PowerEdge R720 servers also support Dell’s CacheCade technology, which provides automated storage tiering on SSDs when using PERC H810 and H710P controllers. The R720 can employ redundant failsafe hypervisors, and can be used as part of Dell’s Virtual Integrated System (VIS) solution.

The PowerEdge R720 supports Dell’s Select Network Adapters daughter cards to house the server’s LOM subsystem without requiring a PCIe slot. Network connectivity options for the R720 include 1000 Base-T, 10Gb Base-T, and 10Gb SFP+ interfaces from Intel and Broadcom. It can be configured to operate with a single 495W, 750W, or 1100W AC power supply module, or can be equipped with a redundant power supply. As a top-tier option, Dell offers an 1100W DC power supply for the PowerEdge R720. In the default redundant configuration, power is supplied equally from both supplies, but may be reconfigured via iDRAC to enable a hot spare feature which switches one power supply to sleep unless needed.

The R720 can support up to four passively-cooled graphics processing units (GPU) to accelerate virtual desktop infrastructure and high performance computing applications. Breaking it down by power capabilities, the R720 can support two 300W, full-length, double-wide internal GPUs or up to four 150W, full-length, single-wide GPUs. Each GPU can support up to 6GB of dedicated GDDR5 memory. Actively-cooled GPU cards are not supported as they interfere with and are not designed for forced-air cooling inside a server. The R720 can also connect to PowerEdge C410x external GPUs through a host interface card (HIC) with an iPass cable. Both the NVIDIA and Dell x16 HICs require the R720’s single x16 PCIe slot to support up to four external GPUs.
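
Note that the two internal GPU configurations are really the same power budget sliced two ways:

```python
# The two supported internal GPU configurations draw the same total power,
# suggesting a shared ~600W power/thermal budget for GPUs in the chassis.
double_wide = 2 * 300   # two 300W double-wide GPUs
single_wide = 4 * 150   # four 150W single-wide GPUs
print(double_wide, single_wide)  # 600 600
```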

As with StorageReview’s Gen8 HP ProLiant DL380p, the R720 supports up to 768GB of memory across 24 DIMMs when equipped with dual-processors. Each processor has 4 memory channels, each channel supporting up to 3 DIMMs. In addition to supporting unbuffered DIMMs (UDIMMs) and registered DIMMs (RDIMMs), the R720 supports load reduced DIMMs (LRDIMMs). In our configurations Dell supplied 24 8GB RDIMMs to populate all memory channels inside the R720 providing 192GB of system memory.
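
The DIMM arithmetic above falls straight out of the memory topology:

```python
# Memory topology math from the paragraph above:
# 2 sockets x 4 channels per socket x 3 DIMMs per channel.
sockets, channels, dimms_per_channel = 2, 4, 3
total_dimms = sockets * channels * dimms_per_channel
print(total_dimms)               # 24 DIMM slots
print(total_dimms * 8, "GB")     # as shipped: 24 x 8GB RDIMMs = 192 GB
print(total_dimms * 32, "GB")    # max with 32GB DIMMs: 768 GB
```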

12th generation PowerEdge servers are part of Dell’s OpenManage platform, built around the Integrated Dell Remote Access Controller 7 (iDRAC7) with Lifecycle Controller, which provides agent-based and agent-free management. To integrate with agent-based solutions, Dell provides OpenManage Server Administrator (OMSA), which provides one-to-one systems management via a CLI or web-based GUI. iDRAC7 can also provide remote access to the system whether or not there is an operating system installed. Through the iDRAC7 interface, users can quickly learn vital system information in many categories including storage, thermals, power and others. When first logged in, iDRAC7 presents the user with health stats in all categories, as well as a preview of the iKVM.

Drilling down into the storage category, users can view information on the current disk configuration, as well as individual drive stats. Shown below is the RAID10 disk array we configured utilizing eight 300GB Seagate Cheetah 15K.3 enterprise hard drives connected through the Dell PERC H710p on-board RAID controller. Through this screen users would be able to quickly find out information about a drive failure remotely as well as narrow it down to which slot has the defective drive.

For remote system access where Remote Desktop or SSH might not be feasible, iDRAC7 includes a virtual console that gives users access to the system through a standard web browser with Java. This window also has useful features for remotely triggering power controls to restart a frozen system or even turn on a system without local access. Users can also mount local media to be accessible by the remote system, or map ISOs to quickly provision systems over the network.

The PowerEdge R720 debuts a new PowerEdge chassis design intended to support greater scalability compared to 11th generation PowerEdge servers, with an increased number of DIMMs, PCIe slots, and hard drives. Also unique to the 12th generation R720 are the four ExpressFlash slots, which currently support 175GB and 350GB Micron RealSSD P320h PCIe SSDs. When we reviewed the HHHL Micron P320h last year, we found it to offer class-leading performance.

The front of the server features a power button and indicator, recessed non-maskable interrupt (NMI) button to troubleshoot software and device driver errors, two USB connectors, Dell’s vFlash SD media reader (activated with iDRAC7), video connector, and a simple LCD control panel interface for local management. The vFlash media SD card reader is used for configuration, scripts, imaging, and other local management tasks. Dell’s 12th generation PowerEdge servers feature a model-specific QR code that links to video overviews of system internals and externals, task-oriented videos and installation wizards, reference materials, LCD diagnostics, and an electrical overview. This code appears several places on the chassis.

The rear of the unit offers access to up to two hot-plug power supplies and associated indicators, network connectivity via Dell’s Select Network Adapter family, two USB ports, iDRAC7 Enterprise port, video connector, and serial connector. Also visible are the available PCIe slots, rear carrying handle, as well as the two redundant 1,100 watt power supplies.

The R720 supports ReadyRails II sliding rails for tool-less mounting in 4-post racks with square or unthreaded round holes, or tooled mounting in 4-post threaded hole racks. The R720 is also compatible with ReadyRails static rails for tool-less mounting in 4-post racks with square or unthreaded round holes or tooled mounting in 4-post threaded as well as 2-post Telco racks.

For improved cable management, Dell offers a toolless cable management arm compatible with the PowerEdge R720. In our lab evaluation, the sliding ReadyRails II were quick to clip into position in our Eaton S-Series Rack, and offered a secure fit with minimal slack.

To install the R720 in the rails, users hold the server by the front and the rear carrying handle and carefully lower it into the extended rails while aligning the mounting pins with their appropriate slots. We found this process to be very intuitive and easy to nail on the first try, which shortened the time required to get the server into production status.

The PowerEdge R720 supports hot-swappable cooling fans in an N+1 configuration, allowing a technician to replace any one fan at a time. Supporting newer environmental conditions, the R720 also incorporates Dell’s Fresh Air cooling design, allowing the server to operate above 35°C/95°F to reduce power consumption and related cooling expenses. This is also beneficial should a user want to deploy the R720 outside of traditional datacenter environments, where temperatures may be more variable.

When it comes to servicing components, the R720 can be operated temporarily without one of its cooling fans, allowing hot-swap replacement. The design of the cooling fan assembly makes it straightforward to remove and replace individual fans or the entire assembly. When replacing an individual fan, you grip the fan’s release button and lift the fan out of position, which also releases the power connection in one easy step. For more expansive repairs requiring the removal of the entire cooling assembly, users can lift latches on both sides of the server and lift out the entire unit in one piece.

When it comes to managing airflow, Dell allows the user to select the appropriate cooling mode for that specific environment. These user-selectable modes are invaluable for servers with additional equipment, such as PCIe Application Accelerators, which can overheat under certain automatic cooling modes. This happens because the server incorrectly throttles fan speeds based on chassis temperatures while local temperatures on the accelerator are still high. In these cases, being able to modify the cooling parameters to run the fans faster than normal can allow better performance and increase reliability.

Utilizing our high-I/O FIO synthetic benchmarks we stressed the four Express Flash PCIe SSDs with a chassis inlet temperature of 27C. We found PCIe SSD temperatures dropped from 62C to 49C by switching the cooling profile from Auto to Maximum Performance and enabling the High Fan Speed Offset. To put it another way, without that adjustment the PCIe SSDs would have been operating 26.5% hotter, which might affect long-term reliability. The downside is a change to the R720’s acoustic profile (increased fan noise), but in the datacenter environments where high-performance servers live, the added noise is rarely a concern. In the tier-one server market right now, HP allows users to customize fan speeds through the BIOS on the ProLiant DL380p Gen8, while Lenovo’s ThinkServer RD630 does not.
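
The "26.5% hotter" figure is simply the 62C reading expressed relative to the cooler 49C baseline:

```python
# The 26.5% figure: the Auto-profile temperature relative to the
# Maximum Performance baseline.
auto_c, max_perf_c = 62, 49
pct_hotter = (auto_c - max_perf_c) / max_perf_c * 100
print(f"{pct_hotter:.1f}% hotter")  # ~26.5%
```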

Dell goes to great lengths to optimize the new 12th generation PowerEdge R720 for power efficiency. For the R720, Dell offers four AC 100-240v PSUs, ranging from 495W up to 1,100W. By gearing the known load to a given power supply, users can achieve up to 96% efficiency with some models, which helps to lower overall power and thermal demands inside a datacenter. In this same category, HP offers power supplies ranging from 460 to 1,200 watts with their most efficient models rated at 94% for the DL380p Gen8, while Lenovo offers just one 800W 80Plus Gold option for their RD630.

Another way the Dell PowerEdge R720 can reduce power consumption inside a datacenter is by capping the system at a user-defined limit. When this limit is reached, the processors are throttled to lower system power usage until the target is reached. This can be useful when introducing new servers into an environment that is designed around strict power or thermal limits.

After the R720 has been customized for a specific environment by choosing the best PSU to fit the requirements and adjusting the power cap policy to fit the datacenter needs, Dell offers excellent monitoring tools for tracking power usage through iDRAC7.

When it comes to describing the performance advantage of front-mounted hot-swappable PCIe storage, the paradigm shift of transitioning from rotating media to 2.5″ SATA or SAS SSDs comes to mind. Random I/O and sequential bandwidth are on a much higher level; it would require many of the industry’s fastest SAS SSDs in RAID to match the performance of one Express Flash PCIe SSD… let alone four of them.

We’ve included a quick performance comparison of four Express Flash PCIe SSDs up against eight 15k SAS HDDs and one Smart Optimus Enterprise SAS SSD in our 8k 70/30 synthetic benchmark. We chose the SMART Optimus for this comparison, since at the time of this review it offered the highest 8k 70/30 performance in the SAS/SATA category. And note, this is a small tease of what’s to come, a detailed storage performance breakdown will take place in a second review highlighting the Express Flash technology.

Breaking it down by the numbers, at peak performance with a load of 16T/16Q per drive/array, the Dell Express Flash solution offered 467,644 IOPS, the SMART Optimus measured 41,586 IOPS, and the eight-drive 15K SAS RAID10 array came in with 4,617 IOPS. To match the performance of four Express Flash SSDs, you’d need 12 of the industry’s fastest SAS SSDs, or more than 800 15K SAS HDDs in RAID. In either of those situations, even if you were able to match the performance by scaling out, you’d lose the benefits of reliability, power consumption, and footprint, since you’d be dramatically increasing the components inside (or outside) the system.
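
A rough check of those scaling claims using the peak IOPS figures quoted above (and naively assuming linear scaling when adding drives, which in practice is optimistic):

```python
# Rough check of the "12 SAS SSDs / more than 800 HDDs" claims, assuming
# naive linear scaling when adding drives (optimistic in practice).
import math

express_flash = 467_644   # four Express Flash PCIe SSDs, combined
optimus_sas   = 41_586    # one SMART Optimus SAS SSD
hdd_raid10    = 4_617     # eight 15K SAS HDDs in RAID 10

sas_ssds_needed = math.ceil(express_flash / optimus_sas)
hdds_needed = math.ceil(express_flash / (hdd_raid10 / 8))

print(sas_ssds_needed)  # 12 of the industry's fastest SAS SSDs
print(hdds_needed)      # ~811 15K HDDs, i.e. "more than 800"
```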

With Dell’s Express Flash layout on the PowerEdge R720, you can still have your cake and eat it too. You don’t have to trade capacity for performance, since you still keep eight SFF bays on the front of the chassis to populate with your favorite SAS or SATA drives. You also gain an edge over other compute server platforms, since four x4 PCIe devices only consume one x16 slot. To exceed the performance of four Express Flash SSDs, you’d need to install three x8 HHHL Micron P320h cards, taking up three out of the seven available slots. That performance-density gives Dell a distinct advantage over the HP ProLiant DL380p Gen8 (6 PCIe 3.0 slots) or Lenovo ThinkServer RD630 (5 PCIe 3.0 slots), which would have to scale out Application Accelerators to match a four-drive Express Flash configuration, taking up valuable PCI-Express slots where Dell needs just one x16 slot.

The Dell PowerEdge R720 12th Generation marks more than just a progressive step in mainstream 2U server technology. Dell has been the first to embrace front-mounted, hot-swappable PCIe storage technology in the new R720. For enterprise users who want maximum performance with all the serviceability benefits of traditional SFF drives, the new Express Flash design is a savior for so many reasons. As we’ve seen in our cursory performance look, the Micron Express Flash drives simply dominate the best in class 15K and SAS SSD options in the market today, while still providing a total capacity of up to 1.4TB in the four bays. Should additional storage be needed, users can deploy SFF hard drives in capacities up to 1.2TB now that provide a great backstop to the flash drives in caching use cases and anywhere else where a platter tier makes sense. And because the Express Flash drives in aggregate only take up a single PCIe slot, there’s still plenty of expandability in the 6 available risers for additional PCIe storage if needed. As noted, we’ll dive more into storage performance within the R720 specifically in subsequent content.

While we certainly appreciate the storage aspects of the R720, there are a ton of other reasons to be excited about the platform as well including management, hardware design and thermal controls. The R720 provides an intuitive package dubbed iDRAC7 for remotely monitoring and managing the server, while providing a landing page with every health stat readily available. Turning to hardware design, the R720 packs plenty of mounting and serviceability options, where almost all frequently accessed components are easy to swap out if servicing is required. For cooling and power needs, Dell offers a wide range of PSU options to tailor the system for the best efficiency. Dell then takes things a step further by allowing users to adjust the cooling profiles for high-end devices like PCIe storage that require higher airflow requirements than the automatic mode can provide. Overall buyers can effectively use the Dell PowerEdge R720 as a blank slate, customizing it exactly for their needs, versus trying to shoehorn in an option-fixed model that might not be best in all situations.

As we compare the Dell PowerEdge R720 to other 2U servers on the market that we have reviewed previously from HP and Lenovo, one point is very clear; the R720 currently offers the fastest storage platform on the market in a 2U form factor. While you could try to match it by scaling out with multiple Application Accelerators in other server platforms, you’d lose potentially valuable PCIe real-estate. Dell’s thoughtful design is evident throughout, and the Express Flash components are even upgradable as newer iterations of that technology come out, like NVM Express.

The Dell PowerEdge R720 12th Generation server is not only well-designed with loads of great management features, it’s also the best performing server on the market in this class. Sure, it’s great as a garden variety standard compute server, but with Express Flash technology the R720 really shines, easily lapping all others. Dell has put the definitive stake in the ground by adopting new technology, giving their users a best of breed solution.

Servers are generally managed headless (remotely), but only after they've first been set up. The normal practice is to connect them to a mouse, keyboard and monitor (possibly via KVM) while the initial setup and installation is performed, and then remove those items when the server is put into production.

So, if you have an existing desktop machine from which you can (temporarily) disconnect your monitor, that's what you'll need to do. If you don't have a monitor you can borrow, most recent TVs will support PC connections with the right cable (which will be much cheaper than buying a whole new display).

Another option is an IP KVM. This is a device that connects to your server and presents a keyboard, mouse, and display to it, but rather than connecting to real peripherals, it sets up a service that you can connect to over the network to control the server. Unfortunately, the price is likely higher than just buying a display, keyboard, and mouse.

If your Dell EMC PowerEdge system is equipped with an LCD panel, it can be used to configure or view the iDRAC IP address of the system. The LCD panel is available only on an optional front bezel.

From the Setup menu, use the iDRAC option to select DHCP or Static IP to configure the network mode. If Static IP is selected, the following fields are available:

On Dell hardware, you have the option of configuring the Forge Appliance LCD, a small readout on the computer’s front panel. Use these steps to configure the LCD display for Forge:

Press Esc > Esc > Esc to exit the iDRAC Settings page and the System Setup Main Menu, then continue with instructions in Section 6.0, Installing Other Components Required by Forge.

We’ve got a client that does big batch jobs every day, loading hundreds of gigabytes of data or more in short bursts. They were frustrated with slow performance on the batch jobs, and after we performed our SQL Critical Care® with ’em, it was really clear that their hardware was the bottleneck. They were using a virtual server backed by an iSCSI SAN, and they were getting bottlenecked on reads and writes. We could put some more memory in it to cache more data, preventing the read problem, but we would still get bottlenecked trying to write lots of data quickly to the shared storage.

We recommended two things: first, switch to a standalone bare metal SQL Server (instead of a virtual machine), and second, switch to cheap commodity-grade local solid state storage. Both of those suggestions were a little controversial at the client, but the results were amazing.

Theoretically, virtualization makes for easier high availability and disaster recovery. In practice, there are some situations – like this one – where it doesn’t make sense.

In the event of a failure, 15-30 minutes of downtime were acceptable. The server was important, but not mission-critical. In the event of an outage, they didn’t mind manually failing over to a secondary server. This meant we could avoid the complexity of a failover cluster and shared storage.

Slow performance was not acceptable during normal production. They wanted to put the pedal to the metal and make an order-of-magnitude improvement in their processing speeds with as few code changes as possible.

The Dell R720 is a 2-processor, 2-rack-unit server with room for 16 2.5″ drives across the front of the server, and two RAID controllers. It’s got room for up to 768GB of memory. It’s my favorite 2-processor SQL Server box at the moment.

I’m not against shared storage – I love it – but when I’m dealing with large batch jobs, a limited budget, and no clustering requirement, it’s tough to beat local SSDs. The R720 lets us use a big stack of 2.5″ solid state drives with two RAID controllers for processing data. Quantity is important here since affordable SSDs tend to be relatively small – 1TB or less. Some larger drives exist, like the Samsung 860 EVO 4TB, but bang-for-the-buck doesn’t quite match the 1TB-class yet.

The Dell R720XD is a similar server, but it’s absolutely slathered with drive bays, handling up to 26 2.5″ drives. While that sounds better – especially with today’s fastest SSD drives still being a little size-constrained – the R720XD only has one RAID controller instead of the R720’s two.

For our Plan B – where we’d fail over if the primary server died – we actually stuck with a virtual server. We built a small 2-vCPU, 8GB RAM guest with SQL Server. We keep it current using the database backups from the primary server. Remember, this application is batch-oriented, so we just need to run backups once a day after the batch completes, and then restore them on the secondary server. When disaster strikes, they can shut down the VMware guest, add more CPU and memory power to it, and it’s off and running as the new primary while they troubleshoot the physical box. It’s not as speedy as the primary physical box, but that’s a business decision – if they want full speed, they can easily add a second physical box later.

A 400GB MLC drive is $1,200 each, so filling all 16 bays would cost $19,200. To put things in perspective, the server itself is about $10k with 2 blazing fast quad-core CPUs, 384GB of memory, and spinners on the fans, so buying Dell’s SSDs triples the cost of the server.
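
The cost arithmetic above, spelled out:

```python
# Cost math from the paragraph above: Dell SSDs vs. the base server.
drive_price, bays = 1_200, 16
ssd_total = drive_price * bays
server_base = 10_000
print(ssd_total)                                 # $19,200 for 16 Dell SSDs
print((server_base + ssd_total) / server_base)   # ~2.9x the base server cost
```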

A consumer-grade SSD is usually half the cost of the Dell drive, meaning we could fill the R720 with 8TB of smokin’ fast storage for a few thousand bucks, plus leave a couple of hot spares on the shelf.

There are risks with this approach – Dell won’t guarantee that their controller and their software will work correctly with this combination. For example, during our load testing, the DSM SA Data Manager service repeatedly stopped, and we couldn’t always use the Dell OpenManage GUI to build RAID arrays.

A third option: ignore the drive bays, and use PCI Express cards. Drives from Intel and Plextor bypass the RAID controller altogether and can deliver even faster performance – but at the cost of higher prices, smaller space, and tougher management. You can’t take four of these drives and RAID 10 them together for more space, for example. (Although that’s starting to change with Windows 2012’s Storage Spaces, and I’m starting to see that deployed in the wild.)

The R720 has two separate RAID controllers, each of which can see 8 of the Samsung drives. The drawback of this server design is that you can’t make one big 16-drive RAID 10 array. That’s totally okay, though, because even just a couple of these race car drives can actually saturate one RAID controller.

How few drives can we get away with? For future client projects, if we didn’t need to fill up the drive bays in order to get capacity, could we saturate the controllers with just, say, 4 drives instead of 8? Can we leave enough space to have hot spare drives? I ran the performance tests with 2, 4, 6, and 8 SSD drives.

How much of a performance penalty do we pay for RAID 5? RAID 10 splits your drive capacity in half by storing two copies of everything. RAID 5 lets you store more data – especially important on limited-capacity solid state drives – but is notoriously slow on writes. (Thus, the Battle Against Any Raid Five.) But what if the drives are so fast that the controller is the bottleneck anyway?
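
The capacity and write-penalty trade-off described above can be sketched with simple arithmetic, here for an 8-drive set of 400GB SSDs (the drive size is illustrative):

```python
# RAID 10 vs RAID 5 trade-off for an 8-drive set of 400GB SSDs
# (drive count and size are illustrative).
n, size_gb = 8, 400

raid10_usable = n * size_gb // 2     # mirrored pairs: half the raw capacity
raid5_usable  = (n - 1) * size_gb    # one drive's worth of space goes to parity
print(raid10_usable, raid5_usable)   # 1600 vs 2800 GB usable

# Classic write penalties: RAID 10 issues 2 physical writes per logical write;
# RAID 5 issues 4 (read data, read parity, write data, write parity).
raid10_write_penalty, raid5_write_penalty = 2, 4
```

So RAID 5 buys 75% more usable space on these capacity-constrained drives, at the cost of roughly double the back-end write work, which is exactly the tension the benchmarks below explore.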

Should we turn the controller caching on or off? RAID controllers have a very limited amount of memory (in our case, 1GB) that can be used to cache reads, writes, or both. In the past, I’ve seen SSD-equipped servers actually perform slower with the caching enabled because the caching logic wasn’t fast enough to keep up with the SSDs. Dell’s recent PowerEdge controllers are supposed to be able to keep up with today’s SSDs, but what’s the real story?

Does NTFS allocation unit size still matter? In my SQL Server setup checklist, I note that for most storage subsystems, drives should be formatted with 64K NTFS allocation units for maximum performance. Unfortunately, often we get called into client engagements where the drives are already formatted and the database server is live in production – but the NTFS allocation unit is just 4K, the default. To fix that, you have to reformat the drives – but how much of a difference will it make, and is it worth the downtime?

The answers to these questions change fast, and I need to check again about once a quarter. When I need to double-check again, and I’m working with a client on a new server build with all SSDs, I offer them a big discount if I can get remote access to the server for a couple of days.

Turning off read caching didn’t affect performance. The controller’s small cache (1GB) just isn’t enough to help SQL Servers, which tend to cache most of their data in memory anyway. When we need to hit disk, especially for long sustained sequential reads, the controller’s minimal cache didn’t help – even with just 4 SSDs involved.

The controller’s write caching, however, did help. Write throughput almost tripled compared to having caching disabled. Interestingly, as long as write caching at the controller was enabled, it didn’t matter whether read caching was enabled or not – we saw the same benefit. I would expect higher write throughput if all of the 1GB of cache was available to cache writes, but that doesn’t appear to be the case with the R720’s controllers at least.

NTFS allocation unit size made no difference. This combination of RAID controller and drives is the honey badger of storage – it just don’t care. You can leave the default caching settings AND the default NTFS allocation unit size, and it’s still crazy fast.

In RAID 10, adding drives didn’t improve random performance. We got roughly the same random performance with 4 drives and 8 drives. Sequential read throughput improved about 35% – good, but maybe not worth the financial cost of doubling the number of drives. Sequential writes saw a big boost of about 60%, but keep in mind that sustained sequential writes is a fairly rare case for a database server. It’s unusual that we’re not doing *any* reads, and we’re writing our brains out in only one file.

SSD speeds still can’t beat a RAMdrive. With SQL Server Standard Edition being confined to just 64GB of memory, some folks are choosing to install RAMdrive software to leverage that extra cheap memory left in the server. If your queries are spilling to TempDB because they need memory for sorts & joins, this approach might sound tempting. Microsoft’s even got an old knowledge base article about it. The dark side is that you’re installing another software driver on your system, and I always hate doing that on production systems. Just for giggles, I installed DataRam’s RAMdisk for comparison. The SSDs are on the left, RAMdisk on the right, and pay particular attention to the bottom row of results:

The bottom row, 4K operations with a queue depth of 32, is vaguely similar to heavy activity on multiple TempDB data files. This particular RAMdrive software manages about 4x more write throughput (and IOPs as well) than the RAID 10 array of 8 drives. (For the record, a RAID 0 array of 8 drives doesn’t beat the RAMdrive on random writes either.)
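When comparing these screenshots, it helps to remember that throughput and IOPS are the same measurement in different units once you fix the block size. A quick conversion sketch (the function names are mine, not from any benchmark tool):

```python
def throughput_to_iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert throughput (MB/s, where 1 MB = 1024*1024 bytes) into
    I/O operations per second at a given block size."""
    return mb_per_s * 1024 * 1024 / block_bytes

def iops_to_throughput(iops: float, block_bytes: int = 4096) -> float:
    """Inverse conversion: IOPS at a block size back to MB/s."""
    return iops * block_bytes / (1024 * 1024)

# Example: 100 MB/s of 4K random writes is 25,600 IOPS.
print(throughput_to_iops(100.0))  # 25600.0
```

That’s why a big MB/s number in the 4K QD32 row implies an enormous IOPS number – each megabyte is 256 separate 4K operations.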

And finally, here’s the performance penalty for RAID 5. RAID 10 is on the left, RAID 5 on the right. Same number of drives, same cache settings, same allocation unit settings:

It’s not a surprise that RAID 5 is faster for reads, but in this round of testing, it was even faster for sequential and large writes. The only place where RAID 5 takes a hit: the bottom right, 4K writes with queue depth 32.
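That bottom-right cell lines up with the textbook write penalty: a RAID 5 random write costs four physical I/Os (read data, read parity, write data, write parity) versus two for RAID 10. Here’s a back-of-the-envelope sketch of that math – purely theoretical numbers that ignore controller cache and full-stripe writes, which is exactly why real benchmarks like the ones above can diverge from them:

```python
# Classic random-write penalty per RAID level (physical I/Os per logical write).
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def usable_drives(level: str, n: int) -> int:
    """Drives' worth of usable capacity in an n-drive array."""
    if level == "raid0":
        return n
    if level in ("raid1", "raid10"):
        return n // 2
    if level == "raid5":
        return n - 1
    if level == "raid6":
        return n - 2
    raise ValueError(level)

def random_write_iops(level: str, n: int, iops_per_drive: float) -> float:
    """Theoretical aggregate random-write IOPS, ignoring controller cache."""
    return n * iops_per_drive / WRITE_PENALTY[level]

# Eight drives at a hypothetical 10,000 write IOPS each:
print(random_write_iops("raid10", 8, 10000))  # 40000.0
print(random_write_iops("raid5", 8, 10000))   # 20000.0
```

On paper RAID 5 gives you more capacity (7 drives’ worth vs 4 from the same 8 disks) at half the random-write speed – but as the next test shows, the controller can rearrange that picture entirely.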

We’re not just testing the drives here – we’re also testing the RAID controller. To get a true picture, we have to run another test. On the left is a RAID 10 array with 8 drives. On the right, just one drive by itself:

The jawdropper hits in the bottom half of the results – when dealing with small random operations, more drives may not be faster. In fact, the more drives you add, the slower writes get, because the controller has to manage a whole lot of writes across a whole bunch of drives.

See, we’re not just testing the drives – we’re testing the RAID controller too. It’s a little computer with its own processor, and it has to be able to keep up with the data we’re throwing at it. In the wrong conditions, when it’s sitting between a fast server and a fast set of solid state drives, this component becomes the bottleneck. This is why Dell recommends that if you’re going to fill the server with SSDs, and you want maximum performance, you need to use the R720 instead of the R720xd. Yes, the R720xd has more slots – but it only has one RAID controller, so you’re going to hit the controller’s performance ceiling fairly quickly. (In fact, the controller can’t even keep up with a single drive when doing random writes.)

This is why, when we’re building a new SQL Server, we want to test the bejeezus out of the drive configurations before we go live. In this particular scenario, for example, we did additional testing to find out whether we’d be better off having multiple different RAID arrays inside the same controller. Would two four-drive RAID 10 arrays striped together be faster than one eight-drive RAID 10 array? Would we be better off with a single RAID 1 for TempDB, and a big RAID 5 for everything else? Should we make four mirrored pairs, and then stripe them together in Windows?

You can’t take anything for granted, and you have to redo this testing frequently as new RAID controllers and new SSD controllers come out. In testing this particular server, for TempDB-style write patterns, a RAID 0 stripe of two drives was actually slower than a single drive by itself!
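When you redo this testing, script it rather than clicking through a GUI so every configuration gets an identical workload. Below is a deliberately crude Python sketch of a 4K random-write probe – without O_DIRECT it mostly exercises the OS write cache, so treat it as a smoke test for wiring up automation, and use a real tool like diskspd or fio for the actual numbers. The function name and defaults are mine:

```python
import os
import random
import time

def random_write_probe(path: str, file_mb: int = 64, block: int = 4096,
                       seconds: float = 2.0) -> float:
    """Hammer a file with 4K writes at random offsets for a fixed duration
    and return approximate write IOPS. A sanity check, not a benchmark."""
    size = file_mb * 1024 * 1024
    buf = os.urandom(block)
    ops = 0
    with open(path, "wb") as f:
        f.truncate(size)                      # preallocate the test file
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            f.seek(random.randrange(0, size - block))
            f.write(buf)
            ops += 1
        f.flush()
        os.fsync(f.fileno())                  # push dirty pages to the drive
        elapsed = time.monotonic() - start
    return ops / elapsed
```

Run it once per candidate array layout, pointed at a file on that array, and keep the config that wins for *your* write pattern – not the one the spec sheet says should win.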


CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. © 2013 Dell Inc.


Hard drives (8-drive systems): up to eight 2.5-inch hot-swappable hard drives, or up to four 2.5-inch hot-swappable hard drives plus up to two 2.5-inch Dell PowerEdge Express Flash devices (PCIe SSDs). (Figure 2. Front-Panel Features and Indicators—10 Hard Drive System.)

Back-panel connectors: iDRAC7 Enterprise port – dedicated management port (NOTE: available for use only if the iDRAC7 Enterprise license is installed on your system); serial connector – allows you to connect a serial device to the system; PCIe expansion card slot – allows you to connect a PCIe expansion card.

System identification button: when pressed, the LCD panel on the front and the system status indicator on the back flash until the button is pressed again. Press to toggle the system ID on and off.

• For the full name of an abbreviation or acronym used in this document, see the Glossary at www.dell.com/ support/manuals. NOTE: Always check for updates on www.dell.com/support/manuals and read the updates first because they often...

Dell Controlled Turbo: helps in controlling Turbo engagement; by default, the option is set to Disabled. This feature is also referred to as Dell Processor Acceleration Technology (DPAT).

Figure 12. Inside the System—8 Hard Drive System. (Callouts: control panel assembly, cable securing clip, cooling fans (7), cable securing bracket, storage controller card, network daughter card, cooling shroud, DIMMs (24), heat sink for processor 2.)

The system contains 24 memory sockets split into two sets of 12 sockets, one set per processor. Each 12-socket set is organized into four channels. In each channel, the release levers of the first socket are marked white, the second socket black, and the third socket green.

Memory configuration table (columns: system capacity in GB, DIMM size, number of DIMMs, DIMM rank/organization/frequency, and DIMM slot population). NOTE: 16 GB DIMMs must be installed in slots A1 through A8, and 8 GB DIMMs must be installed in slots A9 and A11.

To release the memory-module blank from the socket, simultaneously press the ejectors on both ends of the memory module socket. CAUTION: Handle each memory module only on the card edges, making sure not to touch the middle of the memory module or metallic contacts. To avoid damaging the memory module, handle only one memory module at a time.

Table 3. Systems Supporting Three PCIe Expansion Cards lists each riser PCIe slot’s processor connection (processor 1 or processor 2), height (low profile), length (half length), link width, and slot width. NOTE: Both processors must be installed to use riser 1 slots. Table 4.

Figure 26. Removing and Installing the Expansion Card Riser 1. (Callouts: expansion-card riser 1, expansion card, riser guide back (left and right), connector.) Figure 27. Removing and Installing the Expansion Card Riser 3...

Figure 31. Removing and Installing the Heat Sink. (Callouts: heat sink, retention sockets (2), retention screws (2), processor.) CAUTION: The processor is held in its socket under strong pressure. Be aware that the release lever can spring up suddenly if not firmly grasped. Position your thumb firmly over the processor socket-release lever near the unlock icon and release the lever from the locked position by pushing down and out from under the tab.

Figure 32. Processor Shield Opening and Closing Lever Sequence. (Callouts: close-lock symbol, open-lock symbol, processor socket-release levers, processor.) 10. Rotate the processor shield upward and out of the way. CAUTION: The socket pins are fragile and can be permanently damaged. Be careful not to bend the pins in the socket when removing the processor from the socket.

DC power and to safety grounds. Do not attempt connecting to DC power or installing grounds yourself. All electrical wiring must comply with applicable local or national codes and practices. Damage due to servicing that is not authorized by Dell is not covered by your warranty. Read and follow all safety instructions that came with the product.


Figure 42. Cabling Diagram—2.5 Inch (x4) Systems.
Figure 43. Removing and Installing the 2.5 Inch (x4 SAS Hard-Drive and x2 Dell PowerEdge Express Flash [PCIe SSD]) Backplane.
Figure 44. Cabling Diagram—Systems With the 2.5 Inch (x4 SAS and x2 PCIe SSD) Hard-Drive Backplane.
Figure 45.
Figure 46. Cabling Diagram—2.5 Inch (x8) Systems.
Figure 47. Cabling Diagram—2.5 Inch (x8) Systems.
Figure 48. Removing and Installing the 2.5 Inch (x10) Hard-Drive Backplane.
Figure 49. Cabling Diagram—2.5 Inch (x10) Systems.
(Callouts across these figures identify the cable retention bracket, system board, integrated storage/SAS controller card, SAS and PCIe SSD backplanes, release tabs, hard-drive connector, SD card socket, and the SAS, PCIe, power, and backplane signal cables.)

(Callouts: a. mini SAS cable connector, b. metal tab, c. connector on the system board.) Disconnect all other cables from the system board. CAUTION: Take care not to damage the system identification button while removing the system board from the chassis. Grasp the system-board holder, lift the blue release pin, slide the system board toward the front of the system, and lift the system board out of the chassis.

System board connectors: J_VIDEO_REAR (video connector), J_COM1 (serial connector), J_IDRAC_RJ45 (iDRAC7 connector), J_CYC (system identification connector), CYC_ID (system identification button), J_RISER_1A and J_RISER_1B (riser 1 connectors), J_RISER_2A and J_RISER_2B (riser 2 connectors), TOUCH POINT (touch point for holding the system board), J_STORAGE...

6–hard-drive systems: up to four 2.5-inch, internal, hot-swappable SAS, SATA, or Nearline SAS hard drives plus up to two 2.5-inch Dell PowerEdge Express Flash devices (PCIe SSDs). 8–hard-drive systems: up to eight 2.5-inch, internal, hot-swappable SAS, SATA, or Nearline SAS hard drives. 10–hard-drive systems...

• GPU is not supported.
• LRDIMM is not supported.
• 130 W (4-core) processor is not supported.
• Redundant power supplies are required.
• Non-Dell-qualified peripheral cards and/or peripheral cards greater than 25 W are not supported.

Environmental. NOTE: For additional information about environmental measurements for specific system configurations, see dell.com/environmental_datasheets. Maximum temperature gradient (operating and storage): 20 °C/h (36 °F/h). Storage temperature limits: –40 °C to 65 °C (–40 °F to 149 °F). Temperature range for continuous operation (for altitude less than 950 m, with no direct sunlight): 10 °C to 35 °C (50 °F to 95 °F)...

Environmental. NOTE: This section defines the limits that help avoid IT equipment damage and/or failure from particulate and gaseous contamination. If it is determined that levels of particulate or gaseous pollution are beyond the limits specified below and are the reason for the damage and/or failures to your equipment, it may be necessary for you to remediate the environmental conditions that are causing the damage and/or failures.

AMP0302 – Message: System board <name> current is greater than the upper warning threshold. Details: System board <name> current is outside of the optimum range. Action: 1. Review system power policy. 2. Check system logs for power-related failures. 3.

ASR0003 – Message: The watchdog timer power cycled the system. Details: The operating system or an application failed to communicate within the time-out period, and the system was power-cycled. Action: Check the operating system, application, hardware, and system event log for exception events.

(Action, continued:) Review the technical specifications for supported processor types. CPU0010 – Message: CPU <number> is throttled. Details: The CPU is throttled due to thermal or power conditions. Action: Review system logs for power or thermal exceptions. CPU0023 <number>...

(Action, continued:) 2. Turn off the system and remove input power for one minute. 3. Ensure the processor is seated correctly. 4. Reapply input power and turn on the system. 5. If the issue persists, see Getting Help. CPU0702 – Message: CPU bus parity error detected.

(Action, continued:) 5. If the issue persists, see Getting Help. FAN0000 – Message: Fan <number> RPM is less than the lower warning threshold. Details: Fan operating speed is out of range. Action: Remove and reinstall the fan. If the issue persists, see Getting Help.

(Action, continued:) Check if the cable is present, then reinstall or reconnect. MEM0000 – Message: Persistent correctable memory errors detected on a memory device at location(s) <location>. Details: This is an early indicator of a possible future uncorrectable error. Action: Re-seat the memory modules.

LCD Message: Memory mirror lost on <location>. Power cycle system. Details: The memory may not be seated correctly, may be misconfigured, or has failed. Action: Check the memory configuration. Re-seat the memory modules. If the issue persists, see Getting Help.

(Action, continued:) Cycle input power, update component drivers; if the device is removable, reinstall the device. PCI1320 – Message: A bus fatal error was detected on a component at bus <bus> device <device> function <func>. LCD Message: Bus fatal error on bus <bus> device <device> function <func>. Power...

LCD Message: Drive <number> removed from disk drive bay <bay>. Check drive. Details: The controller detected that the drive was removed. Action: Verify drive installation. Re-seat the failed drive. If the issue persists, see Getting Help.

PSU0006 – Message: Power supply <number> type mismatch. LCD Message: Power supply <number> is incorrectly configured. Check PSU. Details: Power supplies should be of the same input type and power rating. Action: Install matched power supplies and review proper configuration in this manual.

PSU0034 – Message: An under-voltage fault was detected on power supply <number>. LCD Message: An under voltage fault detected on PSU <number>. Check power source. Details: This failure may be the result of an electrical issue with cables or subsystem components in the system.

PSU1201 – Message: Power supply redundancy is lost. Details: The power supply tries to operate in a degraded state; system performance and power redundancy may be degraded or lost. Action: Check input power. Reinstall the power supply. If the issue persists, see Getting Help.

Details: An error was reported during an SD card read or write. Action: Reseat the flash media. If the problem persists, see Getting Help. RFM1014 – Message: Removable Flash Media <name> is write protected. LCD Message: Removable Flash Media <name> is write protected. Check SD Card.

SEC0031 – Message: The chassis is open while the power is on. LCD Message: Intrusion detected. Check chassis cover. Details: The chassis is open; system performance may be degraded, and security may be compromised. Action: Close the chassis. Check system logs. SEC0033 – Message: The chassis is open while the power is off.

(Action, continued:) Re-configure the system to the minimum supported configuration. If issues persist, contact support. TMP0118 – Message: The system inlet temperature is less than the lower warning threshold. LCD Message: System inlet temperature is outside of range. Details: Ambient air temperature is too cool.


When it comes to Dell servers, there are two ways to clear the event log. One of them does NOT require a restart, which is nice if your server is up and running. If your server is not booting into the OS, or if the iDRAC web interface is not working, there is a second way involving the hardware that requires a restart. Let's go through each method one at a time.

This method is great if you don't want to restart your server, your iDRAC is configured with a known IP address, and your machine is up and running. You can do this method without internet access, as long as you can reach your server's iDRAC IP address. To do this, the first step is to log into the IP address using your web browser. Mine is set to the default, which is 192.168.0.120.

Now you'll be prompted for your iDRAC user ID and password. You should know this information, but if this is your first time accessing your iDRAC this way, the defaults are "root" for the username and "calvin", all lowercase, for the password. Make sure the dropdown box says "This iDRAC" and then click Submit.

Click "Clear Log". At this point, your event log should be cleared. You can log out of the iDRAC if you have nothing else to do here. Wait a few minutes, and the LCD screen on the front of your machine should go from amber to the standard blue, indicating that there are no persistent errors at the moment. If after a few minutes the screen is still amber, go through the remaining errors using the buttons next to the screen. A persistent error means something in your machine still needs to be fixed before clearing the event log will bring the screen back to standard blue. An example would be RAID cables that are missing or plugged into the wrong ports – in that case, the amber LCD error light will not go away until the machine has detected the RAID cables and has been rebooted again.
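If you'd rather script this than click through the web UI, the System Event Log can also be cleared over the network with ipmitool – a sketch, assuming ipmitool is installed and IPMI over LAN is enabled on the iDRAC (it isn't always by default). The helper names here are mine:

```python
import subprocess

def sel_cmd(host: str, user: str, password: str, *action: str) -> list:
    """Build an ipmitool IPMI-over-LAN command for the iDRAC's
    System Event Log (SEL)."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password,
            "sel", *action]

def clear_sel(host: str, user: str = "root", password: str = "calvin") -> None:
    """Show, then clear, the SEL -- mirrors the web UI's Clear Log button."""
    subprocess.run(sel_cmd(host, user, password, "list"), check=True)
    subprocess.run(sel_cmd(host, user, password, "clear"), check=True)
```

If you have Dell's racadm tooling installed instead, it has an equivalent command (`racadm clrsel`), if memory serves.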

One last thing to note here: if you open the lid on your server but have no other errors when your machine boots up, you will get an amber LCD screen while the machine boots, with an error that says "intrusion", but this will go away after about a minute and the LCD screen will go back to blue.

2nd Method: Hardware way, using Ctrl + E on bootup
For this method, the first thing we need to do is restart the server. Make sure you have your company's permission before you continue.

Then you will get to the next POST screen, which displays all the information of your machine and starts listing options. The option you are looking for will say “Press Ctrl + E to enter remote access setup within 5 seconds...” at the bottom of your screen. Press Ctrl + E immediately when you see that.

After 10 seconds, you will be given two options, to either view or clear the event log. You can clear it if you want, but this is a GREAT opportunity to see what is in the event log. If you are having issues with your server hardware, this is a great place to start looking, but if you simply need to clear it, just use the clear option and hit enter. Clearing the log should be instantaneous.

Once it's cleared, hit escape until you exit the Remote Access screen. At this point, your machine will continue to boot up as normal. Your LCD screen should go back to the standard blue soon, if there are no persistent errors. If it remains amber after a minute, use the arrows on the LCD screen to see what errors are still coming up.

2nd Method: Hardware way, using F2 on bootup (12th and 13th Gen)
For this method, the first thing we need to do is restart the server. Make sure you have your company's permission before you continue.

Then you will get to the next POST screen, which displays all the information of your machine and starts listing options. You are going to want to Press F2 at the end of POST.

If it worked, you will enter this screen. Simply use the down arrow key to navigate all the way to the bottom of the list, to the iDRAC Settings entry. Hit enter.

After 10 seconds, you will be given two options: either view or clear the event log. You can clear it if you want, but this is a GREAT opportunity to see what is in the event log. (If you do not see the System Event Logs option, you may need to update your iDRAC and system BIOS, as this was a feature that was added later.)

Once it's cleared, hit escape until you exit the setup screens. At this point, your machine will continue to boot up as normal. Your LCD screen should go back to the standard blue soon, if there are no persistent errors. If it remains amber after a minute, use the arrows on the LCD screen to see what errors are still coming up.

This finalizes the steps you should take to clear the event log on Dell PowerEdge 12th and 13th Generation servers (models: R720, R720xd, R730, R730xd, R820, R830, R920, R930).