Profile for Charles Hooper > Reviews


Customer Reviews: 46
Top Reviewer Ranking: 14,344
Helpful Votes: 870





Reviews Written by
Charles Hooper (Michigan, USA)

Synology America Disk Station 4-Bay Network Attached Storage (DS415+)
Price: $599.99
21 used & new from $599.99

22 of 23 people found the following review helpful
5.0 out of 5 stars Some Known Compromises, but a Value Leader for the Functionality and System Management Simplicity Offered by the NAS, October 29, 2014
Verified Purchase
I have previously purchased and implemented Synology DiskStation DS1813+, DS412+, DS214+, DS212+, DS213j, and DS112j units, so Synology network attached storage (NAS) devices are not entirely new to me (I also have experience administering various Linux and Windows servers). Most of the Synology NAS units are configured primarily as FTP destinations, although the units also provide one or more Windows shares to network computers using either Active Directory integration or Synology DiskStation internal user accounts, as well as offering network time protocol (NTP) services (to security cameras, Active Directory, and/or a PBX system) and Nagios network monitoring.

For the most part, the Synology NAS units have been very reliable. That said, I have experienced occasional problems with most of the NAS units that provide FTP services to security cameras. Eventually, all of the permitted client connections become "in use" due to the Synology sometimes remembering FTP connections long after the security cameras have forgotten about those connections. This connection "remembering" issue causes a situation where client computers attempting to connect for Windows file sharing are denied access to the server, but the problem also affects web-based access to the Synology DSM operating system. There have been issues with the DiskStation DS412+ locking up roughly 90% of the time that a reboot is attempted through the web-based DSM, resulting in a blue flashing light on the front console that could only be fixed by pulling the electrical power cord (note that it is usually possible to kill phantom connections from the DSM interface, if that interface will display, so that a reboot is typically not required to recover from the "remembered" connections). None of the other DiskStations have experienced lockups during an attempted reboot (or any other lockups that I am able to recall).
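
A simple external probe can reveal when the NAS stops accepting new FTP connections, before the cameras or users notice. Here is a minimal watchdog sketch in Python (the address and credentials below are placeholders, not values from my configuration):

import ftplib
import time

def ftp_accepting_connections(host, user, password, timeout=10):
    # Attempt a complete connect/login/quit cycle against the NAS
    try:
        ftp = ftplib.FTP(host, user, password, timeout=timeout)
        ftp.quit()
        return True
    except ftplib.all_errors:
        return False

while True:
    if not ftp_accepting_connections("192.168.1.50", "probe", "secret"):
        print(time.strftime("%Y-%m-%d %H:%M:%S"), "FTP connect/login refused")
    time.sleep(300)  # probe every five minutes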

The DS415+ was bought to take the place of a DS212+, whose CPU simply cannot keep pace with 15+ high definition security cameras feeding the NAS with motion triggered video clips via FTP. I had considered purchasing the new version of the DS1813+ (possibly to be called the DS1815+), but that model has not been released yet, probably would have the same Intel CPU model as the DS415+ (the DS1812+, DS1813+, and DS412+ all have essentially the same CPU model), and likely would have had higher electrical power consumption than the DS415+ if I filled all drive bays. So, I selected the DS415+ as a device that has some known compromises, but also some power efficiency benefits that are not present in the DS1813+ and DS412+.

The DS415+ ships with 2GB of memory in a regular memory slot, rather than soldered to the system board as is the case for the DS412+, opening the possibility of future memory expansion. With two gigabit network ports, two USB 3 ports, one USB 2 port, and one eSATA port, the Synology DiskStation DS415+ offers decent storage expansion options, although those options are more limited than what is offered by the DS1813+. The DS415+ internally supports up to four hard drives in one of several software RAID levels (SHR, RAID 1, RAID 5, RAID 6, and RAID 10). Drives can be installed without using a screwdriver, although screws are provided to hold the drives in place if the screw-less arrangement seems too flimsy. Unlike the DS1813+, the drive carriages are held in place by a thumb-release locking clip, rather than a flimsy lock and key mechanism. The DiskStation DS415+ more than triples in weight with four typical hard drives installed - the lightweight construction seems to be typical of the various Synology NAS units (at least those that support eight or fewer drives).

The DS415+ ships without an installed operating system, so the first task after powering on the DS415+ with the hard drives installed is installing the latest DSM operating system. The process for installing the operating system is fairly simple, unless there is another DiskStation NAS on the same LAN (the directions provided in the printed quick start guide caused the DSM web page for another, already set up, Synology NAS to appear, rather than the operating system installation page for the DS415+ - the old Synology setup program that used to ship on CD with the NAS units probably would have helped in this situation). Once the NAS has (nearly automatically) downloaded the latest version of the operating system, the operating system installation should complete in a couple of minutes without issue.

The Synology DSM operating system offers a fantastic graphical user interface, implemented with HTML5 and CSS and displayed in a web browser. Unfortunately, Synology tends to rearrange the location of various settings with each DSM version (and change the shape/color of icons), which makes managing different Synology NAS units a little confusing. Much like Windows Explorer, the File Station utility that is built into the DSM operating system supports context sensitive drag and drop, as well as right mouse button popup menus. The File Station utility that is included in the latest DSM version supports displaying more than 300 files in a paged view - that 300 file limit was an irritation when attempting to copy, move, or delete several thousand security camera videos on a daily basis through the GUI using older DSM versions. Like the other DiskStation models, the DS415+ supports telnet sessions, which allow access to the Linux command line and the configuration of scheduled script execution through the modification of the /etc/crontab file (side note: I have had issues with only the DS112j automatically resetting the contents of the /etc/crontab file when the DiskStation was power cycled - I believe that problem was caused by the use of spaces rather than tabs as field delimiters in the file).
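
If I am right about the spaces-versus-tabs cause, a few lines of Python can flag the suspect /etc/crontab entries. A rough sketch (this assumes a Python interpreter is present on the DiskStation, for example installed through ipkg - not something that ships with DSM):

# Flag /etc/crontab lines that use spaces rather than tabs between fields,
# the formatting that I suspect caused the DS112j to reset the file
with open("/etc/crontab") as crontab:
    for line_number, line in enumerate(crontab, start=1):
        stripped = line.strip()
        if stripped and not stripped.startswith("#") and "\t" not in line:
            print("line", line_number, "has no tab delimiters:", stripped)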

A plain vanilla install of DSM 5.0-4528 (as of today at update 1) offers support for network shares (Windows, Mac, and NFS), iSCSI, Active Directory integration, FTP (standard FTP, anonymous FTP, FTPS, SFTP, TFTP), website hosting, WebDAV, SNMP, network time protocol (NTP), remote command line with telnet or SSH, integrated firewall, VPN client, USB printer sharing, and a handful of other capabilities. The DSM operating system's native functionality is easily expanded through the download of free software packages from the Package Center. The packages extend the DS415+'s capabilities to include antivirus, an Asterisk IP phone server, Internet radio rebroadcasting to networked computers, DNS server functionality, iTunes Server, VPN server, RADIUS server, email server, CRM and ERP packages, WordPress, and IP camera monitoring (now includes a license for two IP cameras; additional licenses are roughly $50 per camera), along with a variety of other features. Additionally, ipkg support permits the installation of more than 900 additional applications, including C++ compilers - which in theory suggests that the source for the Nagios network monitoring utility can be downloaded and compiled on the DS415+ (I was able to compile Nagios on a DS1813+, DS412+, and DS212+, and am close to having Nagios working on the DS415+).

I installed four new Western Digital Red 6TB drives, configured in a software RAID 10 array (DSM offered to automatically configure the drives in a SHR array during the initial setup, but did not offer a RAID 10 configuration at that time, so configuring the drives for RAID 10, to reduce recovery time in the event of a drive failure, required a couple of additional mouse clicks). Peak single network link data transfer speeds so far have been impressive, at close to the maximum possible transfer rate for a gigabit network (roughly 112-115 MB/s - about 919 Mb/s), which is virtually identical to the speed seen with the DS1813+ that was using four 3TB Western Digital Red drives, and significantly faster than the DS212+, which has a much slower non-Intel CPU and two Western Digital Green 2TB drives. Pushing approximately 41.6GB of large files to the DS415+ from a client computer consumed between 9% and 11% of the DS415+'s CPU capacity (for comparison, this test consumed 20% of the DS1813+'s CPU capacity).
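
The megabytes per second reported by the client and the megabits per second carried on the wire are related by a factor of eight; a quick conversion, ignoring protocol overhead:

# Convert the observed file copy rates to approximate line rate utilization
for mb_per_second in (112, 115):
    mbit_per_second = mb_per_second * 8  # 8 bits per byte
    utilization = 100.0 * mbit_per_second / 1000  # share of a 1000 Mb/s link
    print(mb_per_second, "MB/s =", mbit_per_second, "Mb/s =",
          round(utilization, 1), "percent of a gigabit link")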

I did not test the DiskStation's IEEE 802.3ad dynamic link aggregation - there was no apparent benefit when I tested the feature with the DS1813+, an HP 4208vl switch, and two client computers. The gigabit switch to which the DS415+ is attached does not support IEEE 802.3ad dynamic link aggregation, so it would have been a very bad idea to connect both of the supplied network cables to the switch.

Power Consumption of the DS415+ (based on the output of a Kill-A-Watt meter; a rough annual cost estimate follows the list):
* 1.1 watts when powered off
* 16 watts with no drives installed and unit is sitting idle
* 44 watts with four Western Digital Red 6TB drives while the unit is receiving files at a rate of 112-115MB/s (for comparison, this test required 46 watts with the DS1813+ when outfitted with four Western Digital Red 3TB drives)
* 39 watts with four Western Digital Red 6TB drives installed while the unit is sitting idle for a couple of minutes (identical to the value measured for the DS1813+)
* 14.5 watts with four Western Digital Red 6TB drives hibernating
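
As promised above, a back-of-the-envelope estimate of what the idle draw costs over a year (the $0.12 per kWh electric rate is my assumption - substitute your local rate):

# Approximate annual energy use and cost at the measured idle power draw
idle_watts = 39
kwh_per_year = idle_watts * 24 * 365 / 1000.0  # watt-hours to kWh over a year
print(round(kwh_per_year, 1), "kWh per year, roughly",
      round(kwh_per_year * 0.12, 2), "dollars per year at $0.12/kWh")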

Even though the throughput and CPU of the DS415+ with software based RAID are no match for the performance and capacity of a high end Windows or Linux server, the Synology NAS units consume far less electrical power, are competitively priced (even though these units are expensive once four 6TB drives are added), should yield a lower total cost of ownership (TCO), and are likely easier to configure and maintain for their intended purpose than either a Windows or Linux server. Like the DS1813+, the DS415+ supports up to 512 concurrent remote connections from other devices (a computer with five mapped drives pointing to the DS415+ consumes five of those 512 concurrent connections). The 512 connection count may not be the hard upper limit on the Synology NAS units - I have encountered some problems with the DS112j blocking connection attempts long before its 64 connection limit is reached - I do not yet know if this issue affects any of the other Synology device models. The lack of an available redundant power supply is a shortcoming of the DS1813+ and other less expensive Synology NAS units, but the power supply for the DS415+ (and the DS412+) is external, so it should be easier to obtain and install a replacement power supply for the DS415+ should the need arise (although the power supply may not have a standardized connection that would permit a replacement to be purchased from a third party supplier).
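
The connection limit is easy to budget against; for example (the drive-mapping count comes from my configuration, and the arithmetic is only illustrative):

# How many client computers fit under the 512 concurrent connection limit
connection_limit = 512
mapped_drives_per_computer = 5  # each mapped drive consumes one connection
print(connection_limit // mapped_drives_per_computer, "computers")  # 102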

Synology offers a group of customer support forums; however, those forums are apparently not actively monitored by Synology support staff. So far, other than questions about whether Plex on the DS415+ is able to transcode 1080P videos, there have been no significant negative comments about the DS415+ on the Synology forums.

The Synology DiskStation DS212+ has served its role surprisingly well for the last two and a half years, even when equipped with slow Western Digital Green drives in a software RAID 1 array. While that NAS was able to support 15+ cameras that potentially simultaneously send video clips via FTP, concurrently allowing a Windows client to connect to the share for the purpose of reviewing the video clips was often just a bit too much of a load for the less powerful DS212+. I am expecting few problems from the DS415+ when serving in a similar role along with supporting a couple of optional packages such as the Media Server, Audio Station, Nagios (currently receiving a Segmentation fault (core dumped) error message when executing the check_ping test command found in my "Install Nagios on a Synology DiskStation DS1813+ or DS412+" blog article), and possibly Plex. Most of the optional Synology packages appear to be decent. However, the Synology Surveillance Station, while possibly useful, still seems to be an overly fragile, overly expensive, experimental package that tends to tax the wireless and wired network much more than the FTP solution that I use with my cameras (your experience with that package may be different from mine).


EnGenius Technologies Long Range 11n 2.4GHz Wireless Bridge/Access Point (ENH202)
Price: $84.96
109 used & new from $70.97

1 of 1 people found the following review helpful
5.0 out of 5 stars High Powered and Reliable Indoor/Outdoor Wireless Access Point that is a Little Challenging to Configure, August 18, 2014
Verified Purchase
Prior to buying the EnGenius ENH202, I had been using a pair of Linksys E2000 routers as wireless access points for a non-commercial wireless network. One of those routers frequently locked up, likely due to the volume of wireless traffic flowing from wirelessly attached high-definition security cameras. The second Linksys E2000 router seemed to be stable, but the wireless range for the unit is typically limited to 50 feet or less. Last October I replaced one of the Linksys E2000 routers with an EnGenius ENH202 wireless access point (note that the ENH202 is NOT a router), with the ENH202 temporarily located inside a building, leaning against an exterior wall. The ENH202 has not experienced a single lockup since being powered on.

With the EnGenius ENH202 configured with a 29 dBm (800 mW) transmit power and a 1 km distance setting, a laptop roughly 45 degrees off-axis to the access point showed a five out of five bar signal at a distance of about 180 feet with the wireless signal shooting through an exterior wall, with a clear line of sight between the laptop and the outside of the exterior wall. The same laptop showed a four out of five bar signal at a distance of about 385 feet under the same conditions. An indoor-rated wireless high-definition camera maintained a four out of five bar signal strength at a distance of roughly 200 feet from the ENH202, with one exterior wall in between. An outdoor-rated wireless high-definition camera maintained a four out of five bar signal strength at a distance of 417 feet while the camera was sitting on a chair, with one exterior wall and 26 feet of tall, somewhat heavy grass in between (if I had elevated the camera 10 feet, the camera might have maintained that same signal strength to 450 feet or more).
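
The 29 dBm and 800 mW figures describe the same transmit power; for reference, the conversion between the two units:

# Convert transmit power from dBm to milliwatts: P(mW) = 10 ** (dBm / 10)
for dbm in (20, 29):
    print(dbm, "dBm is about", round(10 ** (dbm / 10.0)), "mW")
# 29 dBm works out to roughly 794 mW, which is rounded up to 800 mW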

Compared to the Linksys E2000 routers, the EnGenius ENH202 wireless access points are more difficult to configure. The configuration of the EnGenius ENH202 as a wireless access point, however, is a little easier than the configuration of a Cisco Aironet 1262 wireless access point. A typical user of consumer-grade routers, such as the Linksys E2000, will likely be stumped by the EnGenius ENH202 due to the following characteristics:
1) The ENH202 ships with a static IP address (192.168.1.1) that must be changed to an IP address on your network's subnet that does not conflict with the IP addresses used by other devices on your network (a quick way to check whether the default address lands on your subnet is sketched after this list). There is a chance that the default static IP address could conflict with an existing Internet router if the current subnet is 192.168.1.0.
2) The ENH202 ships configured to operate in client bridge mode, rather than wireless access point mode (under the System heading, click Operation Mode, select "Access Point", then click Save & Apply). Note that some wireless devices (including some Cisco-Linksys USB wireless adapters) do not work reliably when the wireless channel mode is set to 40 MHz; reverting to 20 MHz may fix that problem.
3) Changes made to the configuration of the ENH202 do not take effect immediately. Changes to settings are "batched" together, and are only applied when the "Save & Apply" button is clicked on the Save/Reload page, located under the Status heading.
4) While the ENH202 supports Power over Ethernet, it requires 24 volts injected into the Ethernet cable by the supplied power supply, rather than the standard Power over Ethernet 48 volts.
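
Regarding item 1, here is the quick subnet-membership check mentioned above, as a short Python sketch (the local subnet shown is an example - substitute your own):

import ipaddress

# Does the ENH202's factory default IP land inside the local subnet?
default_ip = ipaddress.ip_address("192.168.1.1")
local_subnet = ipaddress.ip_network("192.168.10.0/24")  # example local subnet
if default_ip in local_subnet:
    print("Default IP is on this subnet - check for an address conflict")
else:
    print("Temporarily renumber a PC into 192.168.1.x to reach the ENH202")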

The EnGenius ENH202 seems to work very well when mounted inside a building, although it is advisable not to linger for too long near the access point when it is set to function with a 29 dBm transmit power. I installed a second EnGenius ENH202 on the outside of a building for a friend, and that device has been working without issue for roughly seven months. Thus far I am impressed with the performance and reliability of the ENH202, even though the configuration of the first unit was slightly confusing.


150ft Cat6 Outdoor Waterproof Ethernet Cable Direct Burial 150 ft (600 MHZ) Shielded (PURE COPPER)
Offered by Ultra Spec Cables (RiteAV®)
Price: $79.95
2 used & new from $79.95

0 of 1 people found the following review helpful
5.0 out of 5 stars Works Well for Power Over Ethernet Runs to Security Cameras, Possible to Cut to Create a Shorter Length Cable, August 18, 2014
Verified Purchase
I have had one of these (patch) cables direct buried roughly six inches below grade for the last 10 months without any significant issues. This particular cable has been used to provide a 100Mb/s network connection and power to a security camera that draws up to 6 watts at 48 volts (the PoE standard), with only a few stability problems (something causes the camera to occasionally reboot and then resume normal operation, but I do not know if the cable is at fault) since the cable was put into use 10 months ago, including a couple of days where the temperature did not rise above -13F, as well as a couple of days where the temperature was close to 90F. I recently installed two additional security camera runs using this cable, with the cable buried in plastic water pipe roughly three inches below grade.
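
As a sanity check on the electrical load this cable carries, the current draw follows directly from the wattage and voltage:

# Current carried by the PoE pairs: I = P / V
camera_watts = 6.0
poe_volts = 48.0
print(camera_watts / poe_volts, "amps")  # 0.125 A - a light load for this cable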

This CAT 6 patch cable is very stiff compared to a typical CAT 5e or CAT 6 indoor patch cable. The stiffness is an asset when trying to push the cable through 50 feet of narrow plastic water pipe (or conduit), but a problem when trying to bend through tight radius conduit sweeps. It is possible to kink the cable, so care must be taken when uncoiling the cable. The cable is not gel filled, so if the cable jacket is damaged (possibly from a bad kink), the cable should be immediately replaced. The cable ends have metal sides, and the ends are molded into the cable body, making for a durable outdoor connection into wireless access points and security cameras (note that the cable end should be shielded from direct exposure to the weather).

With the cable ends attached, it is possible to squeeze two cables through 3/4 inch inside diameter conduit (or water pipe) if the two cable ends are staggered when pulled through. By cutting off the cable ends it is possible to squeeze two cables through 1/2 inch inside diameter conduit (this was a tight squeeze, so be careful when pulling the cable through long conduit runs). Because the cable is not gel filled, installing a standard RJ45 connector on a cable where the end was cut off is no more difficult than installing an end on standard CAT 5e or CAT 6 indoor bulk cable.


100ft Cat6 Outdoor Waterproof Ethernet Cable Direct Burial 100 ft (600 MHZ) Shielded (PURE COPPER)
Offered by Ultra Spec Cables (RiteAV®)
Price: $69.95
2 used & new from $69.95

2 of 2 people found the following review helpful
5.0 out of 5 stars Not Gel Filled, but Works Well for Power Over Ethernet to a Security Camera, August 18, 2014
I have had one of these (patch) cables direct buried roughly six inches below grade for the last 10 months without any issues. This particular cable has been used to provide a 100Mb/s network connection and power to a security camera that draws up to 6 watts at 48 volts (POE standard) with zero downtime/stability problems for more than 240 days, including a couple of days where the temperature did not rise above -13F, as well as a couple of days where the temperature was close to 90F. I recently installed two additional security camera runs using this cable, with the cable partially directly buried three to six inches below grade, and partially buried in conduit at the same depth. So far, there are no stability issues with the security cameras connected to those network cables.

This CAT 6 patch cable is very stiff compared to a typical CAT 5e or CAT 6 indoor patch cable. The stiffness is an asset when trying to push the cable through 50 feet of narrow plastic pipe (or conduit), but a problem when trying to bend through tight radius conduit sweeps. It is possible to kink the cable, so care must be taken when uncoiling the cable. The cable is not gel filled, so if the cable jacket is damaged (possibly from a bad kink), the cable should be immediately replaced. The cable ends have metal sides, and the ends are molded into the cable body, making for a durable outdoor connection into wireless access points and security cameras (note that the cable end should be shielded from direct exposure to the weather).

With the cable ends attached, it is possible to squeeze two cables through 3/4 inch inside diameter conduit (or water pipe) if the two cable ends are staggered when pulled through. By cutting off the cable ends it is possible to squeeze two cables through 1/2 inch inside diameter conduit. Because the cable is not gel filled, installing a standard RJ45 connector on a cable where the end was cut off is no more difficult than installing an end on standard CAT 5e or CAT 6 indoor bulk cable.


TriVision Replacement 4mm Focus Length Lens for TriVision NC-326W/NC-326PW/NC-336W/NC-336PW HD 720P/1080P Megapixel Outdoor Cameras
Offered by TriVision Tech., LLC
Price: $25.00

1 of 1 people found the following review helpful
5.0 out of 5 stars Swapping the Factory Installed 6mm Lens for a 4mm Lens Results in a 50% Wider View Angle, August 14, 2014
Verified Purchase
I have several of the TriVision NC-336PW cameras, the seven most recent of which shipped with a 6mm lens, and the remaining cameras shipped with a wider angle 3.8mm lens. The 3.8mm lens equipped cameras have a roughly 50% wider viewing angle at the same distance, so those cameras cover a much wider and taller viewing area. The tradeoff with the 3.8mm lens equipped cameras is that there is a greater bowing distortion in the video image, objects in the captured image are slightly less detailed (each object captured on video is smaller, thus there are fewer pixels defining an object), and roughly 20% of the image at the far left and far right is not illuminated at night by the built-in infra-red lights. The same tradeoffs affect the cameras with the replacement 4mm lens.
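
The roughly 50% difference in viewing angle follows from simple lens geometry. A back-of-the-envelope check, assuming a 1/3 inch sensor about 4.8 mm wide (the sensor width is my assumption, not a published TriVision specification):

import math

# Horizontal view angle of a simple lens: 2 * atan(sensor_width / (2 * f))
def view_angle_degrees(focal_length_mm, sensor_width_mm=4.8):
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

for focal_length in (3.8, 4.0, 6.0):
    print(focal_length, "mm lens:",
          round(view_angle_degrees(focal_length), 1), "degree view angle")
# 3.8mm: ~64.6 degrees, 4mm: ~61.9 degrees, 6mm: ~43.6 degrees - the wider
# lenses show roughly 50% more than the 6mm lens, matching what I observed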

I recently bought and installed two of the 4mm lenses offered by TriVision, hoping that the lenses would provide a wider viewing angle for a couple of the cameras. While some of the pictures on the lens' product page might suggest that the lens ships mounted to a green circuit board, the first picture on the product page shows what actually arrives, unprotected, in a paper envelope. One of the lenses shipped with a black cover over the end of the lens, while the other shipped with a clear cover that was not attached, lying loosely in a plastic bag inside the envelope.

To replace a lens, you will need a #3 Phillips screwdriver to remove the sunshade, a #1 Phillips screwdriver to remove the three screws that hold the infra-red light board in place and to loosen (but NOT remove) the black screw that holds the lens in position, a pair of vise grips or Channel Locks to unscrew the old lens (thread lock was used on the old lens, so it is nearly impossible to turn by hand), and a pair of needle nose pliers to help with reinstalling the infra-red light board's screws. Place a rug, blanket, or towel directly under the camera when removing or reinstalling the infra-red light board.

As mentioned in John Queern's review, the replacement lenses ship without installation instructions. Once the sunshade is removed, unscrew the front cap of the camera, and remove the three screws that hold the infra-red light board in place. Allow the infra-red light board to hang off the side of the camera. Using the #1 Phillips screwdriver, loosen the black screw that prevents the old lens from moving - do not remove the screw because it is very short, and just about impossible to reinstall. Thread lock was likely used to help hold the old lens in place, so it is necessary to use vise grips (or a similar tool) to grab the edge of the old lens barrel and turn it counterclockwise to remove it. Hold the lens board with a free hand to keep it from twisting while the old lens is removed. Once the old lens is removed, take the new lens out of the plastic bag, and carefully start screwing the new lens into the lens board by hand, being careful not to touch either end of the lens. The lens should be easy to install by hand - screw the lens into the threads roughly five turns. Power on the camera with the camera mounted in its expected final installation location, and use a laptop near the camera to view the camera's live video feed. The lens will need to be focused by turning the lens either to the left or right in the threads. An eighth of a turn could make the difference between a blurred view of a license plate and a clear read of a license plate, so it might be helpful to position a vehicle slightly off-center roughly 20 to 30 feet from the camera while adjusting the camera's focus. Once the focus is set, tighten the black screw that holds the lens in place, place the infra-red light board on the three mounting posts, use the needle nose pliers to hold the screws while the screws are tightened, reinstall the front cap of the camera, and reinstall the sunshade.

The viewing angle with the 4mm lens installed is nearly as wide as that of the cameras that shipped with the 3.8mm lens. While I am disappointed that the NC-336PW cameras no longer ship from the factory with the 3.8mm lens, a camera with the 4mm lens installed seems to produce a slightly crisper recorded image than one of my older cameras with a 3.8mm lens, possibly due to spending a bit of time properly focusing the camera with the camera mounted in its final location. I just placed an order for two more lenses. The lenses are not cheap at $25 each, but if the cameras are bought two at a time, TriVision currently discounts the order by $40, which mostly offsets the cost of the optional lenses for the cameras.
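
The "mostly offsets" remark is simple arithmetic:

# Net cost of adding two 4mm lenses when cameras are bought two at a time
lens_price = 25.00
two_camera_discount = 40.00
print(2 * lens_price - two_camera_discount)  # 10.0 - a net ten dollars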


Troubleshooting Oracle Performance
by Christian Antognini
Edition: Paperback
Price: $40.44
37 used & new from $33.93

10 of 10 people found the following review helpful
5.0 out of 5 stars Extensively Researched with Detailed Analysis, Providing Insight that is Not Available from Other Sources, July 15, 2014
Verified Purchase
I pre-ordered this book in December 2013, having previously read the first edition of this book twice. Once the printed copy of the second edition arrived, I ordered the PDF companion for the book from Apress. While the first edition of the book covered Oracle Database 9.2.0.1 through 11.1.0.6, the second edition targets versions 10.2.0.1 through 12.1.0.1+; the author successfully scrubbed the book of all information that is not relevant to the targeted Oracle versions. Despite the removed obsolete content and the new page formatting that places approximately 17% more content per page, the page count for the second edition of the book grew by roughly 130 pages.

Some of the improvements in the second edition, when compared to the first edition of this book:
* Extended explanation of the different definitions of the term cardinality. Pg 19

* Second edition of the book added a half page definition of the term cursor. Pg 21

* The description of V$SQL_CS_HISTOGRAM was omitted from the first edition of this book, and is now included. Pgs 37-39

* The Instrumentation section that was found in chapter 3 of the first edition is now relocated into chapter 2. Pgs 42-48

* A new section was added in this edition of the book that is intended to guide the reader in attacking performance problems using different procedures, based on whether or not the problem is reproducible. Chapters 3 and 4

* A new roadmap flow chart was added to the second edition, showing how to begin the performance tuning process. Pg 104

* Page 204 of the first edition of the book stated that it was not possible to retrieve a Statspack captured execution plan using DBMS_XPLAN - that statement was incorrect. Page 306 of the second edition contains a corrected statement: "Statspack stores execution plans in the stats$sql_plan repository table when a level equal to or greater than 6 is used for taking the snapshots. Even though no specific function is provided by the dbms_xplan package to query that repository table, it's possible to take advantage of the display function to show the execution plans it contains."

* The second edition of the book includes a new SQL optimization techniques chapter - the book seems to be making a more dedicated effort to help the reader understand the decision process that determines when to use the various techniques to attack performance issues - explaining the decision tree for performance tuning. Chapter 11, SQL Optimization Techniques, is a good example of the enhancements made to the second edition.

Over the last several years I have read (and reviewed) a number of Oracle Database performance related books, including the freely available Performance Tuning Guide that is part of the official Oracle Database documentation. None of the books, including the official Performance Tuning Guide (at least three errors identified in the first 100 pages of the 12.1.0.1 version), is completely free of errors (wrong, omitted, or obsolete information). However, this book sets the technical content accuracy bar extremely high for books that cover a broad-range of Oracle performance related topics.

As was the case for the first edition of this book, there are several factors that separate this book from the other broad-ranging Oracle Database performance books on the market:
* For every feature that is described to help solve a problem, as many of the benefits as possible are listed, and an equal amount of attention is paid to the potentially wide-ranging problem areas of the various solutions. Very few potential problems were overlooked in this book. Some of the other books on the market only describe the potential benefits of implementing a feature, without discussing limitations or unintended side-effects. One such example is the discussion of the CURSOR_SHARING parameter in two different books. On page 434 of the "Troubleshooting Oracle Performance" book, the following warning is provided: "Cursor sharing has a reputation for not being very stable. This is because, over the years, plenty of bugs related to it have been found and fixed... my advice is to carefully review Oracle Support note 94036.1..." This quote is in contrast to the following quotes from pages 191 and 484 of the book "Oracle Database 12c Performance Tuning Recipes": "Although there are some concerns about the safety of setting the CURSOR_SHARING parameter to FORCE, we haven't seen any real issues with using this setting." "There are really no issues with setting the cursor_sharing parameter to a nondefault value, except minor drawbacks such as the nonsupport for star transformations, for example." (Reference) (Reference)

* For nearly every feature described in the book, the book lists the licensing and version requirements (sometimes to a specific point release such as 10.2.0.3, 10.2.0.4, 11.2.0.4) that are required so that the reader is able to take advantage of the feature - these requirements are often listed early in the description of the feature (the monitoring/tuning discussion in chapters four and five contain several good examples). The book commonly describes how to accomplish a task in the current Oracle Database release, as well as older releases, if the approach differs. Some of the other books on the market inter-mix features and behaviors in various Oracle Database releases, without clearly distinguishing what will and what will not be available in the reader's environment.

* While many strong statements are made about Oracle Database in the book, there is no "hand waving", and there are very few inaccurate statements. The book uses a "demonstrate and test in your environment" approach from cover to cover. The downloadable script library is extensive, with roughly 280 scripts and trace files, and those scripts often contain more performance information than what is presented in the book. It is thus recommended to view the scripts and experiment with them while the book is read. The scripts are currently downloadable only from the author's website. In contrast, other books seem to take the approach of "trust me, I have performed this task 1,000 times and never had a problem" rather than the "demonstrate and test in your environment" approach that was used in this book.

* Information in this book is densely packed, without unnecessarily repeating information, and without giving the impression that sections of the book are a paraphrase of some other set of articles, a paraphrase of chapters in the official Oracle documentation, or a reprint of a page that was originally copyrighted in 1997. Additionally, the information is well organized into a logical progression of topics, rather than each section of the book appearing as an island of unrelated information.

* The well-placed graphics throughout the book support the contents of the book, rather than distract from the information that is described.

* The book makes extensive use of forward and backward references to other sections in the book, as well as suggestions to review specific Oracle support documents and other books. Some of the other books handle each chapter as an information silo, never (or rarely) mentioning specific content found elsewhere in the book.

* In the acknowledgments section at the beginning of the previous book edition the author mentioned that his English writing ability is poor and that "I should really try to improve my English skills someday." While the English wording in the first edition of the book was easily understood, I took issue with the author's repeated use of the phrase "up to" when describing features that exist in one Oracle Database release version or another. The second edition of the book fixes that one issue that I pointed out, typically replacing the text with "up to and including", and overall the technical grammar in the second edition of the book is among the best that I have seen in a couple of years. It appears that the author exercised great care when presenting his information on each page. In contrast, some of the other Oracle Database book authors seem to be more concerned with slamming something onto the page so that something else that is more interesting could be introduced, in the process introducing sentences that can best be described as nonsense.

* Almost without exception the issues that were identified as wrong, misleading, or incomplete in the first edition of the book were corrected in the second edition. Unfortunately, the same cannot be said about other books that survived to see a second or third edition.

The second edition of "Troubleshooting Oracle Performance" is of value to Oracle Database administrators, programmers, and Oracle performance tuning specialists. Chapter one of this book should be required reading for all people intending to be developers, regardless of whether the person intends to build advanced Oracle Database solutions or just simple Microsoft Access solutions. One of my favorite quotes from the book is found on page three: "Performance is not merely optional, though; it is a key property of an application." Ideally, this book should be read after reading the "Expert Oracle Database Architecture" book (or the Concepts Guide found in the Oracle Database documentation library), and before advancing to books such as "Cost-Based Oracle Fundamentals" or "Oracle Core: Essential Internals for DBAs and Developers".

The full review of this book is quite long, currently covering the first 12 chapters (447 pages) of the book. As such, there is a good chance that this review will exceed the length limit imposed by Amazon - see my Oracle blog for the full review. The index at the back of most Apress books seems to be limited in value, so I have tried to include a useful index as part of this review.

---

Foundation Knowledge, and Miscellaneous Tips:
* The ten most common design problems: no formal logical database design; using generic tables (entity-attribute-value or XML); failing to use constraints; failing to implement physical design (partitioning, bitmap indexes, index organized tables, function-based indexes, etc.); selecting the wrong data type for table columns (using a VARCHAR2 column to store dates); incorrect bind variable usage; failure to use RDBMS specific advanced features; avoiding PL/SQL when extensive data manipulation is required within a single database; excessive commits; non-persistent database connections. Pgs 8-11

* To avoid compulsive tuning disorder, there are three sources for identifying actual performance problems: user-reported unsatisfactory performance; system monitoring that reports timeouts or unusual load; response time monitoring that indicates performance outside the parameters specified by the service level agreement. Pg 11

* Cardinality is the number of rows returned by an operation (the estimated number of rows in an execution plan): cardinality = selectivity * num_rows. Pg 19
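
A quick worked example of that formula (the numbers are mine, for illustration only):

# cardinality = selectivity * num_rows (the book's formula from Pg 19)
num_rows = 100000     # rows in the table
selectivity = 0.0125  # fraction of rows surviving the operation
print(int(selectivity * num_rows))  # estimated cardinality: 1250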

* "A cursor is a handle to a private SQL area with an associated shared SQL area." Pg 21

* Life cycle of a cursor is explained with a diagram. Pgs 21-23

* Good explanation of why hard parses and even soft parses should be minimized as much as possible. Pg 26

* Even though the OPTIMIZER_ENV_HASH_VALUE column value in V$SQL is different for a given SQL statement when the FIRST_ROWS, FIRST_ROWS_1, or FIRST_ROWS_1000 optimizer modes are used, that difference in the OPTIMIZER_ENV_HASH_VALUE column does not prevent a specific child cursor from being shared among sessions with those different optimizer modes. "This fact leads to the potential problem that even though the execution environment is different, the SQL engine doesn't distinguish that difference. As a result, a child cursor might be incorrectly shared." Pg 27 (Reference)

* Example of using Oracle's built-in XML processing to convert the REASON column found in V$SQL_SHARED_CURSOR into three separate regular Oracle columns. Pgs 27-28

* Benefits and disadvantages of using bind variables. Pgs 29-31, 32-39

* Adaptive cursor sharing (bind-aware cursor sharing) was introduced in Oracle 11.1. The IS_BIND_SENSITIVE, IS_BIND_AWARE, and IS_SHAREABLE columns of V$SQL indicate if a specific child cursor was affected (created or made obsolete) by adaptive cursor sharing. Pg 34

* Bind aware cursors require the query optimizer to perform an estimation of the selectivity of predicates on each execution. Pg 37

* Definition of different types of database file reads and writes. Pg 40

* Basic definition of Exadata and the goals of Exadata smart scans. Pg 41

* The database engine allows dynamically setting the following attributes for a session: client identifier, client information, module name, and action name. Pg 45

* Example of setting the client identifier information using PL/SQL, OCI, JDBC, ODP.NET, and PHP. Pgs 46-48

* 10046 trace levels 0, 1, 4, 8, 16, 32, and 64 are described. Pg 55

* See $ORACLE_HOME/rdbms/mesg/oraus.msg for a list of all debugging event numbers - not available on all operating system platforms. Pg 56

* Using DBMS_PROFILER requires the CREATE privilege on the PL/SQL code. DBMS_HPROF just requires execute on DBMS_HPROF. Pg 96

* Very good description of the performance related columns in V$SESSION. Pgs 115-116

* The MMNL background process collects active session history data once a second. Pg 117

* Real-time monitoring is available starting in version 11.1, and requires the CONTROL_MANAGEMENT_PACK_ACCESS parameter to be set to diagnostic + tuning. Pg 127

* Real-time monitoring is enabled for SQL statements only if the executions require at least 5 seconds, if the SQL statement is executed using parallel processing, or if the MONITOR hint is specified in the SQL statement. Pg 127

* The author's system_activity.sql script file produces output that is similar to the data contained in a Diagnostic Pack chart, without requiring a Diagnostic Pack license. Pg 143

* The author's time_model.sql script samples the V$SYS_TIME_MODEL dynamic performance view and outputs results that show the parent, child, and grandchild relationship between the various statistics. Pg 144

* Use an interval of 20 to 30 minutes for the Statspack or AWR sample period to limit the distortion effects of the reported averages (important problems may be hidden if the sample period covers many hours of time). Pg 152

* AWR dictionary views have a DBA_HIST or CDB_HIST (12.1 multitenant environment) prefix. Pg 152

* While the procedure for using Statspack is no longer described in the documentation, the spdoc.txt file in the $ORACLE_HOME/rdbms/admin directory describes how to install, configure, and manage Statspack. Pg 156

* Statspack data can be captured at levels 0, 5, 6, 7, or 10 (see the book for an explanation of what is captured at each level). Pg 157

* The book provides an example of automating the collection of Statspack snaps, and automatically purging old Statspack snaps after 35 days. Pg 159

* The book describes in detail the various inputs that are provided to the query optimizer, including: system statistics, object statistics, constraints, physical design, stored outlines/SQL profiles/SQL plan baselines, execution environment/initialization parameters/client side environment variables, bind variable values/data types, dynamic sampling, and cardinality feedback. The Oracle Database version, edition (Standard or Enterprise), and installed patches also potentially affect the plans generated by the query optimizer. Pgs 170-172

* Prior to version 11.1 the ANSI full outer join syntax was automatically translated into Oracle syntax utilizing a UNION ALL. Pg 187

* Access to the DBMS_STATS package is granted to public, but the GATHER_SYSTEM_STATISTICS role (automatically granted to DBA role) is required to change the system statistics in the data dictionary. Pg 192

* Bug 9842771 causes the SREADTIM and MREADTIM statistics to be incorrectly calculated when gathering system statistics on Oracle Database 11.2.0.1 and 11.2.0.2 unless patch 9842771 is installed. Pg 197

* The calculated CPU cost to access a specific table column is computed as the column position multiplied by 20. Pg 203 (Reference)

* When the mreadtim system statistic is null (has not been computed) or is smaller than the sreadtim system statistic, a formula is used to calculate the mreadtim statistic value when execution plans are generated. When the sreadtim system statistic is 0 or not computed, a formula is used to derive a sreadtim statistic value when execution plans are generated. If the MBRC system statistic is not set (or set to 0), the NOWORKLOAD system statistics are used. See page 204 for the formulas.

* The maximum number of buckets for histograms increased from 254 to 2048 in Oracle Database 12.1. Pg 213

* Script to show tracked column usage that is used by DBMS_STATS. Note that USER should be replaced with the schema name that contains the specified table. Pg 242

* When object statistics are collected using the default NO_INVALIDATE parameter value of DBMS_STATS.AUTO_INVALIDATE, cursors that depend on the object for which statistics were collected will be marked as invalidated after a random time period that is up to five hours (as determined by the value of the _OPTIMIZER_INVALIDATION_PERIOD parameter; SQL statements using parallel execution will be immediately invalidated). Pg 244

* "Unfortunately, not all new features are disabled by this [OPTIMIZER_FEATURES_ENABLE] initialization parameter. For example, if you set it to 10.2.0.4 in version 11.2, you won't get exactly the 10.2.0.4 query optimizer." Pg 277

* "When the [memory utilization specified by the PGA_AGGREGATE_LIMIT] limit is reached, the database engine terminates calls or even kills sessions. To choose the session to deal with, the database engine doesn't consider the maximum PGA utilization [for each session]. Instead, the database engine considers the session using the highest amount of untunable memory." Pg 296

* EXPLAIN PLAN defines all bind variables as VARCHAR2, which may lead to unintended/unexpected data type conversion problems in the generated execution plan. EXPLAIN PLAN also does not take advantage of bind variable peeking, further limiting EXPLAIN PLAN's ability to accurately generate an execution plan for a previously executed SQL statement. Unfortunately, there are times when EXPLAIN PLAN shows the correct predicate information, while the typically more reliable DBMS_XPLAN.DISPLAY_CURSOR, V$SQL_PLAN view, and V$SQL_PLAN_STATISTICS_ALL view show incorrect predicate information for one or more lines in the execution plan. Pgs 302-303, 336, 339, 346, 348

* To have the query optimizer generate a 10053 trace whenever a specific SQL statement is hard parsed, execute the following command, replacing 9s5u1k3vshsw4 with the correct SQL_ID value: ALTER SYSTEM SET events 'trace[rdbms.SQL_Optimizer.*][sql:9s5u1k3vshsw4]'. Pg 308

* Description of the columns found in most execution plans. Pgs 312-313

* Description of the undocumented ADVANCED format parameter value for DBMS_XPLAN. Pg 316

* Adaptive execution plans, where the query optimizer in Oracle Database 12.1 is able to postpone some execution plan decisions (such as selecting a nested loops join vs. a hash join), requires the Enterprise Edition of Oracle Database. Pg 349

* The IS_RESOLVED_ADAPTIVE_PLAN column of V$SQL indicates whether or not an execution plan takes advantage of adaptive execution (use +ADAPTIVE in the format parameter of the DBMS_XPLAN call to see the adaptive portion of the execution plan). Pg 351

* Rather than just suggesting to the reader to add an index to avoid an unnecessary full table scan, the book includes the following important note: "For instance, if you add an index like in the previous example, you have to consider that the index will slow down the execution of every INSERT and DELETE statement on the indexed table as well as every UPDATE statement that modifies the indexed columns." Pg 361

* "Simply put, hints are directives added to SQL statements to influence the query optimizer's decisions. In other words, a hint is something that impels toward an action, rather than merely suggests one." Pg 363

* "However, mixing comments and hints don't always work. For example, a comment added before a hint invalidates it." This warning is an actual threat to intentionally included hints, and this warning was not included in the first edition of the book. Pg 366

* The default query block names assigned by the optimizer are: CRI$ (CREATE INDEX statements), DEL$ (DELETE statements), INS$ (INSERT statements), MISC$ (miscellaneous SQL statements like LOCK TABLE), MRC$ (MERGE statements), SEL$ (SELECT statements), SET$ (set operators like UNION and MINUS), and UPD$ (UPDATE statements). Use the QB_NAME hint to specify a different, non-default query block name for use with various hints. Pg 369

* "One of the most common mistakes made in the utilization of hints is related to table aliases. The rule is that when a table is referenced in a hint, the alias should be used instead of the table name, whenever the table has an alias." Pg 371

* Cross reference between several initialization parameter values and the equivalent hint syntax. Pg 373

* A demonstration of creating a hacked stored outline for a SQL statement (use as a last resort when it is not possible to create a suitable outline using other techniques such as exp/imp or initialization parameter changes). Pgs 381-387

* SQL profiles, a feature of the Enterprise Edition with the Tuning Pack and the Diagnostic Pack options, are applied even when the upper/lowercase letters and/or the white space differs, and if the FORCE_MATCH parameter is set to true, a SQL profile may be applied even if the literals (constants) in a SQL statement differ. While SQL profiles allow text normalization, stored outlines and SQL plan management do not support the same degree of text normalization. Pgs 390, 394, 402

* SQL plan management, which requires an Enterprise Edition license, could be considered an enhanced version of stored outlines. Pg 402

* "Inappropriate hints occur frequently in practice as the reason for inefficient execution plans. Being able to override them with the technique you've seen in this section [SQL plan baseline execution plan replacement (stored outlines are also capable of removing embedded hints using the techniques shown on pages 381-387)] is extremely useful." Pg 408

* "What causes long parse times? Commonly, they are caused by the query optimizer evaluating too many different execution plans. In addition, it can happen because of recursive queries executed on behalf of dynamic sampling." Pg 433

* "The values provided by the parse count (total) and session cursor cache hits statistics are subject to several bugs." Details are provided on pages 437-438

---

Suggestions, Problems, and Errors:
* The following scripts are currently missing from the script library:
-- session_info.sql Pg 45 (in the script library as session_attributes.sql per the book author).
-- ash_top_files.sql, ash_top_objects.sql, and ash_top_plsql.sql Pg 136
-- search_space.sql Pg 169
-- incremental_stats.sql Pg 255 (8/22/14: now downloadable)
-- copy_table_stats.sql Pg 256 (8/22/14: now downloadable)
-- optimizer_index_cost_adj.sql Pg 288 (8/22/14: now downloadable)
-- display_statspack.sql Pg 306
-- dynamic_in_conditions.sql Pg 499
-- fbi_cs.sql Pg 506
-- reserve_index.sql should be reverse_index.sql Pg 671 (8/22/14: confirmed)

* Page 19 states that "selectivity is a value between 0 and 1 representing the fraction of rows filtered by an operation." I understand the intention of this statement, and the examples that follow the statement further clarify the author's statement. However, the "filtered" word in the statement seems to suggest that selectivity represents the fraction of the rows removed by an operation, rather than the rows that survived the filter at an operation. This is just a minor wording problem that might cause the reader a little confusion when reading the book. The author has addressed this issue in his errata list for the book. (8/22/14: confirmed)

* Page 24, figure 2-3 has two entries for "Store parent cursor in library cache" - the second entry should show "Store child cursor in the library cache", just as it is shown in figure 2-2 of the first edition of the book. The author has addressed this issue in his errata list for the book.

* Page 62 states, "The ALTER SESSION privilege required to execute the previous trigger can't be granted through a role. Instead, it has to be granted directly to the user executing the trigger." I believe that the session executing the AFTER LOGON trigger, by default, would not need the ALTER SESSION privilege if the user creating the AFTER LOGON trigger had the ALTER SESSION privilege because the trigger is created by default with Definer's Rights (Reference) (8/22/14: confirmed)

* Page 72 states, "disk is the number of blocks read with physical reads. Be careful--this isn't the number of physical I/O operations. If this value is larger than the number of logical reads (disk > query + current), it means that blocks spilled into the temporary tablespace." While the statement is correct, and supported by the test case output, it might be a good idea to also mention that prefetching (index or table) or buffer warm up could be another possible cause of the DISK statistic value exceeding the value of the QUERY statistic value (especially after the database is bounced or the buffer cache is flushed). The PHYSICAL READS CACHE PREFETCH and PHYSICAL READS PREFETCH WARMUP statistics might be useful for monitoring this type of access. (Reference)

* Page 73 states, "In addition to information about the first execution, version 11.2.0.2 and higher also provides the average and maximum number of rows returned over all executions. The number of executions itself is provided by the Number of plan statistics captured value." It appears that the word "executions" should have been "execution plans". (Reference) (8/22/14: confirmed)

* The book made the transition from views that require no additional cost licensing to views that require a Diagnostic Pack license on pages 114 and 115, without providing the reader a warning about the licensing requirements (such a warning is typically present in the book). (8/22/14: After discussion with the book author, determined that accessing V$METRIC, V$METRICGROUP, and V$METRICNAME does not require a Diagnostic Pack license)

* There are a couple of minor typos in the book that do not affect the accuracy of statements made by the book. For example, "... prevent it from happenning again" on page 149. Most of these typos are easily missed when reading the book. (8/22/14: confirmed)

* The book states on page 215, "This is especially true for multibyte character sets where each character might take up to three bytes." Per the Oracle Database globalization documentation, some character sets, such as UTF-8 (AL32UTF8), may require up to four bytes per character. The author has addressed this issue in his errata list for the book.

* The book states on page 221, "For this reason, as of version 12.1, top frequency histograms and hybrid histograms replace height-balanced histograms." It appears based on the Oracle documentation that height-balanced histograms are not replaced if the histograms are created before the upgrade. Additionally, if the ESTIMATE_PERCENT parameter is specified in the DBMS_STATS call, a height-balanced histogram will be created if the number of distinct values exceeds the number of buckets. (Reference). Page 239 makes a clarifying statement, "Also note that some features (top frequency histograms, hybrid histograms, and incremental statistics) only work when dbms_stats.auto_sample_size is specified [for the ESTIMATE_PERCENT parameter]." "Work" may be a poor wording choice, "generated" may be a better choice of wording. (8/22/14: confirmed)

* The book states on page 282 about dynamic sampling level 11: "The query optimizer decides when and how to use dynamic sampling. This level is available as of version 12.1 only." Oracle Database 11.2.0.4 also adds support for dynamic sampling level 11. (Reference) (8/22/14: confirmed)

* When describing the output of DBMS_XPLAN, the book states, "Reads: The number of physical reads performed during the execution." The book should have clarified that the unit of measure for the Buffers, Reads, and Writes statistics is blocks. Pg 313 (8/22/14: confirmed)

* The book states, "Syntactical errors in hints don't raise errors. If the parser doesn't manage to parse them, they're simply considered real comments." That statement is correct for all hints except the oddly behaving IGNORE_ROW_ON_DUPKEY_INDEX hint, which will raise an "ORA-38917: IGNORE_ROW_ON_DUPKEY_INDEX hint disallowed for this operation" error, the CHANGE_DUPKEY_ERROR_INDEX hint which will raise an "ORA-38916: CHANGE_DUPKEY_ERROR_INDEX hint disallowed for this operation" error, and the RETRY_ON_ROW_CHANGE hint which will raise an "ORA-38918: RETRY_ON_ROW_CHANGE hint disallowed for this operation" error if the hints are specified incorrectly. Pg 365 (a similar comment is made at the top of page 371). (Reference) (Reference 2) (8/22/14: confirmed)

* The book states, "The aim of using a prepared statement is to share a single cursor for all SQL statements and, consequently, to avoid unnecessary hard parses by turning them into soft parses." This statement should be clarified to point out that the aim is to share a single cursor for all _similar_ SQL statements (those that would have differed only by a literal/constant if bind variables were not used). Pg 428 (8/22/14: confirmed)

* The book states, "Cursor sharing doesn't replace literal values contained in static SQL statements executed through PL/SQL. For dynamic SQL statements, the replacement takes place only when literals aren't mixed with bind variables. This isn't a bug; it's a design decision." This statement about dynamic SQL statements, at least for Oracle Database 11.2.0.2 and 12.1.0.1 (and possibly 10.2.0.2) is no longer true. The author's cursor_sharing_mix.sql script does shows literal value replacement when bind variables are also used for SQL statements executed outside PL/SQL. Pg 434 (Reference Oracle Cursor Sharing Test.txt). (8/22/14: After discussion with the book author, "dynamic" implicitly implies execution in PL/SQL (rather than ad hoc SQL statements that might be submitted from client-side tools), so the statement in the book is correct, and does match the output of the author's script. The author's script was attempting to demonstrate how executing SQL in a PL/SQL block could cause a difference in the parsing and execution of the submitted SQL statement)

---

Data Dictionary Views/Structures (the index at the back of the book misses most of these entries):
* ALL_TAB_MODIFICATIONS Pg 237
* AUX_STATS$ (SYS schema) Pgs 193, 196
* CDB_ENABLED_TRACES Pgs 58, 59, 60
* CDB_HIST_SQL_PLAN Pg 305
* CDB_OPTSTAT_OPERATIONS Pg 269
* CDB_SQL_PLAN_BASELINES Pg 409
* CDB_SQL_PROFILES Pgs 393, 399
* CDB_TAB_MODIFICATIONS Pg 237
* COL$ (SYS schema) Pg 242
* COL_USAGE$ (SYS schema) Pg 242
* DBA_ADVISOR_EXECUTIONS Pg 413
* DBA_ADVISOR_PARAMETERS Pg 413
* DBA_AUTOTASK_TASK Pg 259
* DBA_AUTOTASK_WINDOW_CLIENTS Pg 260
* DBA_ENABLED_TRACES Pgs 58, 59, 60
* DBA_HIST_ACTIVE_SESS_HISTORY Pg 420
* DBA_HIST_BASELINE Pgs 155, 156
* DBA_HIST_COLORED_SQL Pg 153
* DBA_HIST_SNAPSHOT Pg 154
* DBA_HIST_SQL_PLAN Pgs 305, 324
* DBA_HIST_SQLTEXT Pg 324
* DBA_HIST_WR_CONTROL Pg 153
* DBA_OPTSTAT_OPERATION_TASKS Pg 200
* DBA_OPTSTAT_OPERATIONS Pgs 200, 201, 269, 270
* DBA_SCHEDULER_JOBS Pgs 257-258
* DBA_SCHEDULER_PROGRAMS Pgs 258, 259
* DBA_SCHEDULER_WINDOWS Pgs 258, 260
* DBA_SCHEDULER_WINGROUP_MEMBERS Pg 258
* DBA_SQL_MANAGEMENT_CONFIG Pgs 417, 418
* DBA_SQL_PLAN_BASELINES Pgs 408, 409
* DBA_SQL_PLAN_DIR_OBJECTS Pg 230
* DBA_SQL_PLAN_DIRECTIVES Pg 230
* DBA_SQL_PROFILES Pgs 393, 399
* DBA_TAB_MODIFICATIONS Pg 237
* DBA_TAB_STAT_PREFS Pg 248
* DBA_TAB_STATS_HISTORY Pg 261
* DBA_USERS Pgs 235, 242
* GV$INSTANCE Pgs 60, 63
* OBJ$ (SYS schema) Pg 242
* OL$ (OUTLN schema) Pgs 381, 383
* OL$HINTS (OUTLN schema) Pgs 381, 383-384
* OL$NODES (OUTLN schema) Pg 381
* OPTS


TriVision NC-239WF HD 1080P IP Security Camera System with 1920 x 1280 Pixel Resolution and Facial, Car License Plate Recognition in 45 Feet and Install in 3 Steps with Our Free Dedicated Apps on iPhone, iPad, Android Smartphone and Tablet
TriVision NC-239WF HD 1080P IP Security Camera System with 1920 x 1280 Pixel Resolution and Facial, Car License Plate Recognition in 45 Feet and Install in 3 Steps with Our Free Dedicated Apps on iPhone, iPad, Android Smartphone and Tablet
Offered by TriVision Tech., LLC
Price: $348.00
5 used & new from $188.43

9 of 9 people found the following review helpful
5.0 out of 5 stars Good 1080P Indoor Camera, Significant Improvements over the TriVision NC-107WF, March 12, 2014
Verified Purchase(What's this?)
Length: 2:59 Mins

Update September 21, 2014: If you own this camera, or one of the other TriVision 720P or 1080P cameras, I strongly suggest installing the just released 5.78B (20140916) firmware version. The new firmware version not only adds the ability to adjust the image brightness, contrast, hue, saturation, sharpness, and auto exposure target for the camera through a new Image Setup menu option, but also significantly improves the default color accuracy, contrast, and image sharpness.

Description of the Attached Video:
The video shows several video clips that were triggered by the cameras' motion capture capabilities (most of the specified motion pre-record video segment was trimmed), demonstrating the camera's ability to capture motion outdoors in daytime as well as indoors using full color or infrared night vision. The video clips were imported into the Windows Live Movie Maker application where subtitles were added, and the video was output as a 1920x1080 resolution WMV video file (15 frames per second, with a 4400kbps target - roughly the same as the original recording) with minimal video or sound quality loss. However, Amazon will likely further compress the uploaded video, degrading the original quality. The timestamp at the top-left of the video was added automatically by the cameras during recording.

I also uploaded a picture that will hopefully help people determine whether or not this camera will work for license plate recognition purposes. The camera has essentially the same viewing angle as the current TriVision NC-336PW with a 6mm lens - probably about 50-60 degrees. That lens will help improve the camera's license plate recognition capabilities, while decreasing the viewing angle. The ability to read license plates is partially dependent on lighting conditions, the angle of the license plate with respect to the camera, and the distance from the camera. Reading a license plate straight-on at two car lengths is possible during the daytime, while off-center recognition at that distance is more difficult.

Last December I bought a couple of the TriVision NC-240WF cameras, which are supposed to be identical to the TriVision NC-239WF cameras, other than the color. Those NC-240WF cameras arrived with firmware version 5.49 (build 20130820), which I immediately upgraded to firmware version 5.56. The TriVision NC-239WF cameras that I received in late February shipped with firmware version 5.57 preinstalled, so the company appears to be getting cameras onto the market with newer camera firmware versions fairly quickly. Firmware version 5.60 was released in mid-January, but I have not tried that firmware version in any of my cameras yet.

I bought the TriVision NC-239WF cameras to replace a couple of the TriVision NC-107WF (or NC-107W) cameras that I bought in 2012. While two of the NC-107WF cameras have been rock solid since purchase, some of the NC-107WF cameras are very unstable, regardless of whether they use wireless or wired network connections (at any one time three or four of the older cameras might temporarily disappear from the Multi-Live program, sometimes requiring a power cycle to recover). In contrast, the NC-239WF cameras, much like the NC-240WF and the outdoor NC-336PW cameras, appear to be very stable on both wireless and wired network connections.

I know that people have reported in reviews of the various TriVision cameras that the cameras are difficult to set up for wireless network connections. The wireless network equipment (wireless routers, wireless access points, or other such devices) that is already in place plays a huge role in the success of setting up these cameras on wireless networks. Some wireless routers become very confused when the camera "jumps" from a wired network connection to a wireless network connection - see my review of the TriVision NC-240WF for a couple of ideas to resolve this type of problem. Wireless security using MAC address filtering can also cause headaches for the cameras - from what I understand, there is a bug in the latest Netgear firmware that is related to wireless security using MAC address filtering. Some routers with built-in wireless, such as some Cradlepoint 3G routers, have very weak signals that may require the camera to be placed within 20 feet of the router; high-end wireless access points may support this camera at distances of 200 feet or more. The camera requires a four out of five bar signal to operate correctly, although a three bar signal strength seems to be sufficient to support FTP uploading of video clips.

I had no issues setting up the NC-239WF cameras. The first camera required just under eight and a half minutes to completely set up for production use, with the camera's software already installed on my computer. In that time I changed the administrator's password, set a static IP address, configured and tested the wireless connection, configured the stream settings (note that the camera shipped with the primary stream's H.264/MPEG4 bit rate set to 2048 kbps, so I bumped that value to 4096 kbps to improve recording quality), configured the on-screen display to show the date and time, disabled the infrared LED control (so that the camera could be positioned behind a window), configured the motion detection settings so that the Threshold was at a starting value of about 25% and the Sensitivity was at about 75%, enabled the "Record on Alarm" and "Send Files in Storage to FTP Server" tasks, set the system identity for the camera, verified the time zone setting, and configured the Synology NAS to be used as the camera's time server. The configuration might seem difficult at first, but the configuration interface is fairly well organized. A typical first-time buyer of this camera might need 30 to 60 minutes to completely set up the camera.

Pages 40 and 41 of the "Camera Installation for PC Only" manual that ships with the camera attempt to describe how to record pictures or video when the camera detects motion. The description of those tasks could have been written a little more clearly in the manual. You must buy a microSDHC memory card for the camera - make certain that the power cord is unplugged any time you insert or remove the memory card. Below is a partial list of the configuration steps and settings that I typically specify for the TriVision cameras used in production:
1. Run the Camera Setup program on your computer - that program is installed from the mini-CD that ships with the camera.
2. The Camera Setup program should find your camera. Double-click your camera in the list when it is found by the Camera Setup program.
3. Click the Setting button in the web page that appears on the screen.
4. Near the right side of the web page you should see the word "Task", click that word.
5. Four options should appear; click "Task Management".
6. If you want the camera to record a picture to the memory card when the camera senses motion, put a checkmark in the box that is between the number 7 and the words "Snapshot to storage on alarm".
7. If you want the camera to record video to the memory card when the camera senses motion, put a checkmark in the box that is between the number 9 and the words "Record to storage on alarm".
8. Click the Apply button.
9. You must then configure each item that has a checkmark. For example, if you put a checkmark next to "Record to storage on alarm", click those words on the page (access the link to the settings).
10. The settings that I like to use here are as follows:
--- Record from: Primary stream
--- Post-recording time: 15 seconds
--- Split duration: 15 seconds
--- Record thumbnail: Disable
--- Record file name: type a unique name for the camera here
--- Suffix of file name: Date time
11. Click the Apply button.
12. Click the Back button.
13. If you put a checkmark next to "Snapshot to storage on alarm", then configure those settings too.
14. At the right of the web page you should see "Motion Detection", click that item. This is where you are able to control how sensitive the camera is to motion. By default the camera is likely not sensitive enough.
15. Under the "Window 1" heading, slide the Threshold setting to the left so that it is at 25% (1/4 of the distance from the left).
16. Under the "Window 1" heading, slide the Sensitivity setting to the right so that it is at 75% (3/4 of the distance from the left).
17. Click Apply.
18. Watch the blue bar that is between the Threshold and Sensitivity setting. Any time that blue bar reaches the location of the Threshold indicator, the camera will perform the tasks that you previously selected. If you find that the camera is generating too many false alarms, slide the Threshold bar to the right slightly or the Sensitivity bar to the left, then click Apply. If you find that the camera still is not sensitive enough, slide the Threshold bar to the left slightly or the Sensitivity bar to the right. Don't forget to click Apply.
19. The camera will sometimes be a little slow at reacting to motion detection events, and Windows 7's "Extra Large Icons" view seems to show a preview image from roughly five seconds into a video clip. The camera can be set to start recording video three, five, or 10 seconds before it detects motion - I find that having the camera record five seconds before motion is detected to be ideal. At the right of the web page, click Camera.
20. Click Stream Setup.
21. Under the Primary stream heading, change Prerecord to "5 seconds".
22. Click Apply.

Unlike the NC-240WF cameras that I bought in December, which shipped with the wrong manuals, the NC-239WF shipped with only two manuals that were recently updated (the Mac specific manual was not included). The mini-CD included with the camera (do not attempt to put this mini-CD into a CD drive that does not have a slide-out tray) also included an updated version of the CameraLive program that now eliminates the need to use the separate Camera Setup utility to configure the camera's settings. That updated CameraLive program _still_ does not have the flexibility offered by the Multi-Live program that shipped with the TriVision cameras in 2012, which allowed quick views of 4, 9, 16, or 32 cameras simultaneously, with the ability to change the relative position of each camera in the user interface. The NC-239WF is compatible with the Multi-Live program (specify port 80 in the configuration, along with the camera's IP address and the admin user's password). The Multi-Live program also shipped with the older Y-Cam cameras, and is downloadable from the Y-Cam website (assuming that you previously bought a Y-Cam camera so that you do not run afoul of copyright laws).

The NC-239WF camera with the 5.57 firmware is ONVIF 2 compliant, as are the NC-240WF and NC-336PW cameras. That means that the camera is compatible with various video recording software and hardware solutions that work with ONVIF 2 compliant cameras. The NC-239WF also supports MPEG4, MJPEG (no audio), H.264, RTSP audio, HTTP M3U8, HTTP ASF, and JPG image (with a consistent filename). The free VideoLAN VLC Media Player and the free Apple QuickTime Player are able to decode and display many of the stream types. While the camera supports video playback of videos recorded to an optional microSDHC memory card, that type of playback is almost certain to be a frustratingly slow experience if the camera typically records a lot of motion triggered video clips. If you only have one or two cameras, power off the camera, take out the memory card, and use a computer with a memory card reader to play back the video clips. If you have more than one or two cameras, I highly recommend investing in a Synology NAS and a memory card for the camera, and then instructing the cameras to send their video clips to the Synology NAS using FTP.

Positives of the camera:
* With the 5.57 firmware installed, the camera's time seems to stay in sync with a local NTP server (feature enabled on a Synology NAS). Prior to the 5.56 firmware, the other TriVision cameras lost roughly 30 seconds per week even when set to synchronize with a local NTP server. The newer firmware also offers to automatically turn on and off daylight savings time, which was previously a manual adjustment (this fix causes problems for the Multi-Live program, which displays an incorrect date and time in the top-right corner of the video stream).

* Compatible with Internet Explorer that ships with Windows 8.1 when camera firmware version 5.57 is installed (this firmware version was preinstalled on my camera).

* Compatible with Synology Surveillance Station's (version 6.0-2719) ONVIF camera protocol and a variety of other NVR solutions when camera firmware version 5.56 is installed.

* The manuals are reasonably well written, although a couple of the sentences in the manual are written in very shaky English. The specifications in the manual are targeted for the 640x480 resolution cameras, rather than the 1920x1080 resolution cameras.

* Built in motion detection simply works. The camera supports up to four motion detection rectangular regions, each with adjustable sensitivity. This is a pixel change based motion detection feature, which is able to identify objects more than 200 feet from the camera just as easily as it is able to identify falling snow that is just a few feet from the camera. Unlike the external TriVision cameras, there is no provision for adding a PIR unit to the camera to reduce the number of false positive motion detection events.

* Works with the Multi-Live software that shipped with the older TriVision cameras, but this software does not ship with this camera (the Multi-Live program may not be fully compatible with Windows 8.1 - on one Windows 8.1 computer, one camera at random displayed a distorted picture). Multi-Live provides a quick simultaneous view of up to 36 cameras. Multi-Live shrinks the NC-240WF camera's native 16:9 aspect ratio to a 4:3 aspect ratio (or variable aspect ratio) without cropping the edges of the image.

* Works with the IP Cam Viewer (Basic) app on Android (Motorola Xoom) tablets to allow simultaneous viewing of multiple security cameras, much like the Windows based Multi-Live program - just without audio. The cameras also work with the manufacturer's recommended AnyScene app, as tested on a Motorola Xoom Android tablet. The AnyScene app, like the supplied Camera Live software, is able to remotely view a camera's live video stream, even when the camera is on a different network (no manual or automatic configuration of the router is required to use this functionality, and the feature continues to work even if the camera's IP address changes).

* The NC Setup utility, which also ships with the older cameras, works well with the new cameras to quickly locate and help configure cameras that are still using DHCP assigned IP addresses (I suggest changing all cameras to static IP addresses as soon as possible - the older TriVision NC-107WF cameras were unstable when using DHCP assigned IP addresses).

* Records video to a user installed memory card (I installed a SanDisk Ultra 32 GB MicroSDHC C10/UHS1 memory card) or a Windows compatible NAS (I have not yet had any success using Windows 7 Pro/Ultimate as a NAS for the camera) in Apple QuickTime format, just as do all of the other TriVision cameras that I have used.

* Optional camera tasks are available to schedule periodic captures of still frame JPG images and send those pictures to email servers (the feature is not compatible with all email servers; the manual recommends Google's Gmail, which I successfully tested with an NC-316W and NC-336PW camera), FTP servers, HTTP web servers (not tested), and to storage (either the configured memory card or a NAS). Additional optional tasks allow sending one or more still frame JPG images to the same destinations when motion is detected. Recording video to an FTP server requires the configuration of two tasks in the camera setup, "Record to storage on alarm" and "Send files in storage to FTP server".

* Core functionality of the camera allows the camera to minimize wireless (or wired when using the integrated 100Mbps RJ45 connection) network traffic caused by the camera - the camera does not need to continuously broadcast its video feed to a digital video recorder device for the video with motion to be captured (although the camera can continuously broadcast its video stream with almost no configuration). The camera is capable of asynchronously uploading video stored on the memory card (or a configured NAS) to an FTP server (I use a Synology NAS with the FTP service enabled). With the two task configuration, the camera will continue to record motion triggered video to the internal memory card if the FTP server is unavailable. Video may be uploaded to a NAS or Windows share using a single configured task in the camera configuration; however, the camera will not revert to using an installed memory card when the NAS or Windows share is unavailable.

* The built-in microphone's sound quality (the sound test is at the end of the attached video) seems to be reasonably good with some white noise filtering, and there was no noticeable electronic noise caused by the camera's wireless hardware as has been reported with Y-Cam's Cube camera line.

* The camera supports streaming playback of video stored on the optional memory card, allowing quick views of the video. The older TriVision cameras required a much more time consuming process to view video stored on the camera's memory card, a process which downloaded the entire video clip before playback could begin. The recorded video will play back using either the QuickTime control or the TriVision ActiveX control (it seems that the camera tries to force the use of the QuickTime control even when using Internet Explorer to view the videos).

* Video uploaded to an FTP server or NAS is stored in a single folder (directory) on the server, which allows quick review of the video uploaded by multiple cameras throughout the day. Y-Cam's 1080P outdoor bullet camera, in contrast, uploads video into a nested directory storage structure of \ Year \ Month \ Day \ Hour \ Minute - that nested directory structure makes it impossible to quickly review video uploaded by one or more cameras; I do not know if the same inconvenient storage structure is used with Y-Cam's Cube camera line.

* Connects wirelessly to 802.11b/g/n WEP and WPA2 encrypted networks even when the network SSID is not being broadcast (tested with a Cisco Linksys E2000 router acting as access point, and an EnGenius ENH202 wireless access point).

* Offers two-level user access security to the camera for administrators and regular users. The current release of the Camera Live software only uses the admin user's login. I understand that a future release of the Camera Live software will work for either a privileged account (admin) or a regular user account, and Camera Live will be able to configure the camera, thus eliminating the need to use the Camera Setup utility.

* The motion-triggered recorded video duration is configurable from 5 to 86,400 seconds, and that recorded video may be automatically broken up into multiple video clips ranging from 10 seconds to 1,200 seconds. A 15 second "Post-recording time" and 15 second "Split duration" combined with a 5 second pre-record (configurable in the Stream setup to record 0, 3, 5, or 10 seconds before the motion detection event) is ideal for video clips that are previewed with Windows 7's Windows Explorer's "Extra Large Icons" view, allowing the motion triggering event to appear in the Windows generated thumbnail.

* The mounting stand is sturdy and reasonably easy to repoint using a quarter or half dollar coin.

* The power supply included with the camera has a very long cord (roughly eight to ten feet, or possibly 3 meters) which helps with installation. The power supply's 12 volt connector plugs directly into the back of the camera, as does the included network cable.

* Automatic light intensity adjustment during the daytime, automatically switching to black and white night vision with infrared lighting when necessary. It is possible to disable the camera's infrared lights (necessary when the camera is pointed through a window) and have the camera be sensitive to or immune from stand-alone infrared lights that are on the other side of the window.

---

Negatives:
* The company still does not have a functional website; the user manual mentions the ability to download firmware updates from the company's website. Based on what has been posted on Amazon by a TriVision tech support person, the website is supposed to be available by the end of March 2014.

* The camera's time resets to the year 1969 when power is lost, and is restored to the correct time and date once a network time source is reachable. A local Synology NAS can be configured as the network time source for the camera so that the correct time appears on the camera in the event that a power outage knocks out the Internet connection.

* The 1080P video quality is not comparable with that produced by a $30,000 network TV camera. While I did not find that the camera ever paused for a second while recording, there is still a bit of a "flip-frame" feel to the video. As an economical security camera, the generated day time video is fine. The night time infra-red illuminated video is also fine in enclosed spaces at a distance of 10 to 25 feet. With the camera pointed into an empty area at night with nothing within roughly 30 feet of the camera, the video becomes very grainy, similar to the video recorded by the outdoor NC-336PW.
---

On Windows 7 the recorded video clips will show in the Extra Large Icons view with a black frame around the edges - that black frame blocks a portion of the video preview. That black frame can be disabled by importing the following information into the Windows registry, and then rebooting the computer (save the text shown between the --------- markers in a text file with a .reg extension and then double-click the file to import the settings).
---------
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\SystemFileAssociations\video]
"ThumbnailCutoff"=dword:00000001
"Treatment"=dword:00000000
---------

As I mentioned above, I recommend using the camera with a Synology NAS, instructing the camera to send its video clips to the NAS using FTP, if you intend to buy two or more cameras. Below is information that I previously posted on Amazon about selecting a Synology NAS:
The least expensive Synology NAS option appears to be a DS112j (roughly $160) plus the cost of a hard drive. I have used this NAS with a Western Digital Red 2TB hard drive as an FTP destination for a couple of TriVision cameras. You can save about $30 by going with a 1TB drive, but you will be kicking yourself for going cheap once you start adding camera #3 and #4 (in a pinch, if you have an old hard drive sitting around, you might be able to use it short-term in the NAS so that you do not need to immediately buy a hard drive):
DS112j: http://www.amazon.com/gp/product/B007KWLXRK/
WD Red 2TB: http://www.amazon.com/WD-Red-NAS-Hard-Drive

In general with the Synology devices, models with a "j", "se", "air", or "slim" suffix are the low cost versions: they have slow CPUs and no internal fan (so they probably need to be in an air conditioned environment in summer time), have the lowest electricity operating cost, and tend to have the lowest limits on the number of "active" connections. The 64 "Max Concurrent CIFS/AFP/FTP Connections" sounds like a very high number when you only have four cameras, but if one or two cameras have a weak wireless signal, the Synology will remember connections (and count those connections) to the camera long after the cameras have forgotten about the connections. Each computer that is set up to access the location on the Synology where the videos are stored will count as an additional connection, even if that computer is not actively looking at the videos. If you set up a second location for computer backups on the Synology, that potentially further decreases the available connections by 1 to 3 per computer.

The Synology models without a suffix (DS114, DS214, DS414, etc.) typically have faster CPUs than the models with the "j" suffix, and typically support a higher number of "Max Concurrent CIFS/AFP/FTP Connections", meaning that you are less likely to need to reboot a NAS because the cameras suddenly stopped sending videos to the NAS (the videos will be stored on the memory cards that you installed until the Synology is accessible again). The faster CPU means that the NAS is able to read from and write to the hard drive(s) faster, and also handle other tasks in the background (for instance, I have some of my Synology units monitoring computers, servers, printers, cameras, and other devices, with the Synology sending an email to me if a device stops responding). There are a lot of free add-on software packages for the Synology NAS units, and if any of those free add-on packages appear to be interesting, you probably should pick a NAS with a fast CPU. The Synology models in this group may have "passive" cooling, meaning that there is no internal fan. Some of the units have built-in USB3 ports (don't bother trying to add an external hard drive to a NAS with only USB2 ports; it will be very slow), which will allow you to attach an external Western Digital (or other brand) hard drive with a USB3 port for additional storage space.

The Synology models with a "+" indicator have faster CPUs, so they are able to more easily handle the free add-on packages in addition to receiving videos from your cameras via FTP; the two drive unit uses a faster version of the Marvell Armada that is used on the less expensive Synology NAS units. The four drive and larger units use variants of the Intel CPU architecture, which allows the NAS access to additional features such as the Plex Media Server ( http://blog.synology.com/blog/?p=1012 ) and the Oracle Instant Client (which allows requesting information from an Oracle database). The "+" models have "active" cooling, meaning that they have one or more internal fans, allowing them to more easily survive in environments without air conditioning.

The number that immediately follows the "DS" in the name usually describes the (maximum) number of hard drives that the unit supports. A DS112j supports one hard drive, a DS214+ supports two hard drives, a DS412+ supports four hard drives, and a DS1813+ supports eight internal drives and up to 10 additional drives in optional expansion units. The number at the end of the model name refers to the model year of the release - a new unit released in September might carry the following year's date in the model name. Usually, the higher that year indicator, the faster the NAS is, and the more (RAM) memory is installed. The DS1812+ and the DS412+ both have the same CPU and the same amount of (RAM) memory installed. The DS1813+ carries essentially the same CPU with twice as much memory, and uses a 64 bit operating system rather than a 32 bit operating system, allowing the memory to be expanded to 4GB rather than 3GB.

Most of the throughput statistics (network transfer speeds) stated on the manufacturer's website assume that you have a gigabit network. A gigabit network port is capable of transferring up to roughly 112MB to 115MB per second. Any claim on Synology's website, such as the 208MB per second read speed of the DS214+, is, plain and simple, a distortion of the truth. Achieving that speed requires a compatible network switch that is properly configured, and it requires that two or more client computers are simultaneously trying to read information from the NAS - a single computer will never see 208MB per second from the NAS.
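As a sanity check on those numbers, here is a rough back-of-the-envelope calculation, sketched in PowerShell (the 94 percent efficiency figure is my own rough assumption for typical Ethernet/IP/TCP framing overhead, not a Synology specification):
---------
# Rough usable throughput ceiling for a single gigabit network link
$rawMBps = (1e9 / 8) / 1MB       # 1 gigabit per second is about 119 MB/s of raw capacity
$usableMBps = $rawMBps * 0.94    # assume roughly 6% is lost to protocol overhead (assumption)
"{0:N0} MB/s" -f $usableMBps     # prints approximately 112 MB/s
---------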

A Synology NAS should work fine with fewer than the maximum number of hard drives installed. If you have only non-essential information on the NAS (video clips from the cameras probably fall in this category), then installing a single hard drive into the NAS is probably fine. If you plan to store information on the NAS that is important, then you should install multiples of two identical drives (2, 4, 6, 8) into the NAS and tell the NAS to set up a RAID 1 array (in the case of two hard drives) or a RAID 10 array (in the case of more than two hard drives). This configuration creates an internal "backup" of all information stored in the NAS, making it less likely to lose everything if one of the hard drives in the NAS stops working correctly. This also means that you will lose half of the total capacity of the installed hard drives. For example, if you install four 3TB hard drives, you will only receive a "nominal" 6TB of actual disk space.

If you plan to stop at four cameras and only intend to use the NAS with the cameras, a DS112j will be sufficient. My DS112j was showing roughly 50% CPU utilization with two of the four cameras (NC-326PW 720P) continuously sending video clips via FTP, while I was connected to the NAS on a computer watching the video clips arrive. With the memory card arrangement in the cameras, if the NAS becomes a little slow the cameras do not care too much - they will continue to write new video clips to the memory card and continue to simultaneously send older video clips to the FTP server as fast as the FTP server and network will permit. I initially planned to have only eight 640x480 cameras sending video to my Synology DS212+ NAS. That was nearly two years ago. I now have significantly more cameras connected to the DS212+ (perhaps twice as many), and most of those are now high definition 1080P cameras. That DS212+ has no trouble keeping up with a typical day's set of videos from the cameras. However, at night time when it is snowing, and every external camera is sending a continuous flood of video clips to the DS212+, the NAS' CPU approaches 100%, and it becomes nearly impossible for my Windows 7 computer to generate the thumbnail previews of the video clips on the NAS (so that I can determine which video clips should be viewed) because the NAS is so slow to respond. When this happens, I generally wait until the next day to review the video clips.


Pro Exchange Server 2013 Administration (Expert's Voice in Exchange)
Pro Exchange Server 2013 Administration (Expert's Voice in Exchange)
by Jaap Wesselius
Edition: Paperback
Price: $47.99
47 used & new from $41.70

10 of 10 people found the following review helpful
5.0 out of 5 stars Sufficient Detail to Perform Exchange Conversions and Administer Exchange 2013, March 9, 2014
Verified Purchase(What's this?)
I started looking at replacing Exchange Server 2007 a bit over a year ago, and was quite surprised to see books on the market describing Exchange Server 2013 all the way back in December 2012. Exchange Server 2013 was released for general availability on December 3, 2012, so some of those books that I looked at near the end of 2012 likely jumped the gun a bit, with reviews that more or less state as much.

This book was released in December 2013, just in time for Exchange Server 2013 SP 1, released February 24, 2014, to possibly cause a few of the book's statements of fact to no longer hold true. The text of the book appears to cover through Exchange Server 2013 Cumulative Update 2. A web search for information about Exchange Server 2013 SP 1 revealed that the author is maintaining a set of articles that address Exchange Server 2013 Cumulative Update 3 and Service Pack 1, so the book's contents are effectively being updated by the author. The Apress website has a fairly extensive downloadable script collection for this book.

In addition to buying this book, I also purchased the "Mastering Microsoft Exchange Server 2013" book. As a matter of comparison, this book seems to be a little better organized, with fewer hidden nuggets of information that will be missed by skim-reading the book. This book has a significantly higher number of grammar, spelling, and word substitution problems than the Mastering book; if it were not for the content in the last 75 pages of the book, I would have deducted one or two stars from this book's rating. The grammar problems seem to be mostly concentrated in the first half of the book, while the spelling errors, mostly simple typos that should have been caught by a spelling checker or book editor, continue throughout the book. This book has a much more thorough troubleshooting section than the Mastering book, including a potential solution for iOS devices that cause a heavy load on the Exchange Server's CPUs. In addition to covering the built-in troubleshooting functionality such as reviewing the event logs, queue viewer, and the Microsoft autodiscover tool, this book also described:
* Using the IIS logs to troubleshoot Autodiscover (page 116)
* VSSADMIN tool to list the VSS writers on the server (page 258)
* Troubleshooting with the TELNET client (page 354)
* NSLOOKUP (page 355)
* Microsoft Remote Connectivity Analyzer (page 363)
* Google Apps Toolbox (page 372)
* Performance Analysis of Logs (page 381)
* Log Parser (page 382)
* Log Parser Studio (page 384)

I recorded five pages of typewritten notes from this book, compared to just three and a half pages from the Mastering book, which is a bit surprising considering that the Mastering book is about twice as long. Some of the interesting items from my notes:
* It is necessary to "touch" the address list from the Exchange 2013 server Pg 101: Get-AddressList | Set-AddressList

* It is possible to use the IIS logs to help troubleshoot Autodiscover. Pg 116

* Exchange uses an ESE (JET) database, like Active Directory, WINS and DHCP. Pg 131. The ESE database is JET Blue, while an Access database is JET Red.

* In Exchange 2013, Exchange no longer supports Single Instance Storage, where a large email attachment sent to several people would only be stored once in the database (as was the case in prior Exchange versions) - such an attachment is now stored once for each recipient. Pg 137

* Example of creating mailbox enabled users using a CSV file. Pg 195

* Pipe Exchange Management Shell output to ConvertTo-HTML to generate command output in HTML format, then use > to redirect that output to a file. Pg 201 (see the first sketch after this list)

* Exchange 2013 has a "self-healing" feature called Managed Availability, where Exchange is constantly monitoring its health and attempts to take appropriate action when the health monitoring indicates a problem. Pg 305 (Side note: this monitoring leaves behind a mess in the Journal mailboxes.)

* Example of creating a PowerShell HTML formatted email that indicates what is in the SMTP queue. Pg 333 (see the second sketch after this list)

* Change the ActiveSync policy to Discretionary if iPhone and iPad users generate too much load on the Exchange Server, see page 341 for the script. While not mentioned in the book, this is a significant potential problem, as reported in various Internet forum posts. See Microsoft knowledgebase article 2563324 for a potential solution that does not affect the ActiveSync policy setting.
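For the ConvertTo-HTML tip above (Pg 201), the command might look something like the following minimal sketch, run from the Exchange Management Shell (the selected properties and the output path are hypothetical examples, not the book's exact script):
---------
# Generate an HTML mailbox summary and redirect it to a file
Get-Mailbox -ResultSize Unlimited |
    Select-Object Name, Database, ServerName |
    ConvertTo-Html -Title "Mailbox Summary" > C:\Reports\mailboxes.html
---------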
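For the SMTP queue email tip above (Pg 333), the general approach might look like this sketch (the addresses and SMTP server name are hypothetical placeholders; the book's page 333 script is more elaborate):
---------
# Build an HTML table of the current transport queues, then email it
$body = Get-Queue |
    Select-Object Identity, Status, MessageCount |
    ConvertTo-Html -Title "SMTP Queue Report" | Out-String
Send-MailMessage -From "exchange@example.com" -To "admin@example.com" `
    -Subject "SMTP Queue Report" -Body $body -BodyAsHtml -SmtpServer "mail.example.com"
---------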

Errors and Concerns So Far (I might update this list later):
* The first quarter of the book seemed to use an unnecessarily large number of half-page or three-quarter-page screen captures.

* The book contains an abnormal number of typos that probably should have been identified by a spelling and grammar checker, the author, or a book editor. The grammar problems are concentrated mostly in the first half of the book. Some examples:
--- Page 62 and page 65 instruct the reader to execute the same command.
--- On page 82 (and at least one other page) the author used the phrase "green-field installation" without defining that phrase.
--- On page 146 (and at least one other page) the author used the word "beamer" when describing resource mailboxes. Using the word "projector" might have reduced a bit of potential confusion.
--- On page 218 the book states that the default database quota setting to prohibit send and receive is 2.1GB. The actual default is 2.3GB.
--- On page 290, the book has a typo, "Outlook 2007 Professioanl [sic] Plus"
--- On page 296, the book has an accidental word substitution, "... and these passive clients are not used by any clients; they are used for redundancy purposes."
--- On page 299, the book has a typo, "Use the Browse butten [sic] to select..."
--- On page 314, the book has a word substitution that could cause a configuration problem, "A more realistic approach is to use a 24GB requirement [of RAM memory] per active mailbox when a usage profile of 100 messages per day is anticipated."
--- On page 314, GB is used to describe a processor characteristic, "If a 4GB processor core server is used, you end up with a 10GB memory requirement for an Exchange CAS-only server."
--- On page 322, the book has a typo, "You will most likely see unexptected [sic] results..."

I am a hands-on IT manager, so I performed some brief testing of Exchange Server 2013 SP 1. Below are some of my notes from that testing:
* While a lot of books (including this one) likely state that there should be no firewall between the Exchange Server and the domain controllers, installing Exchange Server 2013 on a new Windows Server 2012 machine will fail if the local firewall on the Exchange server is disabled, likely with an error stating that the Remote Registry service may not be running. Not only will the install fail, but the installer may start the Windows firewall, thus blocking all connections to the server; in the first trial the Intel teaming 802.3ad network configuration was also corrupted, meaning that connections to the server still failed even after the Windows firewall was disabled. (Side note: on Windows Server 2012 it is also not possible to share out a printer unless the local firewall on the server is enabled.)

* This book (on page 50) is correct that an SMTP send connector had to be created for a mailbox on Exchange 2013 to send an email to a mailbox on a different Exchange Server in the same site, at least when that mailbox is located on Exchange Server 2007. The messages remained in an SMTP queue until the SMTP send connector was created (an example of creating such a connector appears after this list). It appears that page 615 of the "Mastering Microsoft Exchange Server 2013" book may be incorrect; the following is a paraphrase of that other book: "Exchange 2013 has built-in send connectors for Exchange Servers in the same site - these built-in connectors will not show in the user interface."

* There is a bug in Exchange Server 2013 SP 1 that affects third-party transport agents, which likely impacts any virus scanners that are used on the server. See Microsoft knowledgebase article 2938053 for a patch.

* A third party virus scanner may also require CGI scripting support in IIS, and .Net 3.5 SP 1, so it might be beneficial to install those features while installing the other requirements for Exchange Server. The Windows Server 2012 installation media includes .Net 3.5 SP 1.

* The user interface seems to indicate that the built-in anti-malware is still active even though that feature was turned off during installation. A checkbox that cannot be changed through the user interface contains a checkmark indicating that anti-malware is enabled. To disable the built-in anti-malware, if a third party virus scanner is used, execute the Disable-AntimalwareScanning.ps1 script, then restart the MSExchangeTransport service (see the sketch after this list).

* The book states (on page 116 and probably a couple of other pages) that when the Exchange 2013 Client Access Server is installed, none of the virtual directories (Autodiscover, OWA, ECP, etc.) are configured with an Internal URL or an External URL. With a fresh install of Exchange Server 2013 SP 1 I found that all of the internal virtual directories appeared to be configured, but none of the external virtual directories were configured. If the Internal URLs were also pre-set during a non-SP1 install of Exchange Server 2013, then the book is incorrect on this item.

* On or around page 98, the book states that a message will appear in Outlook when that user's mailbox move completes, if the user is running Outlook during the move operation. The book also stated that Outlook 2013 does not need to restart when the mailbox move completes (Edit: correction, the book stated that if the mailbox was already on an _Exchange Server 2013_ server and the mailbox is moved to another 2013 server, the user does not need to restart Outlook). Based on my testing with Exchange Server 2013 SP1, after a mailbox is moved from Exchange 2007 to Exchange 2013, an Outlook 2007 or 2010 client will be prompted to restart Outlook when Outlook is started for the first time after the mailbox move completes (so the message still appears even if Outlook was not running during the move). Outlook 2013, which was running on a client computer at the time of the mailbox move, displayed a message stating that Outlook had to be restarted due to a change made by the Exchange administrator. After restarting Outlook 2013, the following message appeared: "Cannot start Microsoft Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance." (and re-appeared every time Outlook was restarted). Examining the Outlook profile indicated that the old Exchange 2007 server was still specified, rather than being updated as expected. Deleting and recreating the Outlook profile on that computer fixed the problem.

* Moving email accounts from Exchange Server 2007 to Exchange Server 2013 may cause a new replacement SMTP email address to be created that conforms with the standard Email Address Policy. For example, if the person's name is Robert Jones and the Email Address Policy is configured to use the first letter of the first name and the full last name, the person's SMTP address on the old Exchange Server 2007 would begin with RJones followed by the domain name. If at some point in the past the person's display name was changed from Robert Jones to Bob Jones, the person's SMTP address would be changed to begin with BJones when the account is moved to Exchange Server 2013 (the former SMTP address will also be listed for the user, with smtp in lowercase letters, but would not be used for incoming and outgoing emails). The problem is worse for special email accounts where an entry was only specified in the first name field on the old server - in those cases the new generated SMTP address for the email account would become a single letter after the move to Exchange Server 2013.

* The GUI user interface for Exchange Server 2013 SP 1 now allows changing the internal and external URLs for the various virtual directories by clicking Servers at the left, and Virtual Directories at the right. Both books stated that the internal URLs could not be changed through the GUI interface, so this difference might be a change for SP 1.

* Good luck trying to use self-signed certificates with Exchange Server 2013 - doing so just might be possible.
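Regarding the send connector note above, creating such a connector from the Exchange Management Shell might look roughly like the following sketch (the connector name, address space, and server names are hypothetical placeholders, not values from either book):
---------
# Create an SMTP send connector that routes mail for the internal domain
# through the Exchange 2007 server (placeholder names throughout)
New-SendConnector -Name "To Exchange 2007" `
    -AddressSpaces "SMTP:internal.example.com;1" `
    -SourceTransportServers "EX2013" `
    -SmartHosts "ex2007.example.com" `
    -DnsRoutingEnabled $false
---------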
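For the anti-malware note above, the two steps boil down to the following (locating the Exchange Scripts folder through the ExchangeInstallPath environment variable is one common convention, shown here as an assumption rather than the book's exact wording):
---------
# Disable the built-in anti-malware scanning, then restart the transport service
& (Join-Path $env:ExchangeInstallPath "Scripts\Disable-AntimalwareScanning.ps1")
Restart-Service MSExchangeTransport
---------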

---

Overall, this is a helpful book, but the book does not have all of the answers. I generally prefer to always have a second opinion (or corroborating evidence) on facts found in this and other books, so picking up a second companion Exchange Server 2013 book might be very beneficial.


Mastering Microsoft Exchange Server 2013
Mastering Microsoft Exchange Server 2013
by David Elfassy
Edition: Paperback
Price: $37.99
54 used & new from $26.25

15 of 16 people found the following review helpful
5.0 out of 5 stars Extensively Detailed Book that Might Benefit from a Couple of Additional Simplified Bulleted Lists, February 28, 2014
Verified Purchase(What's this?)
Roughly a year ago I started planning the replacement of the company’s Windows Server 2003 servers with Windows Server 2012 servers, and started preparing for the Exchange Server 2007 to Exchange Server 2013 upgrade. At the time, the finalized version of Exchange Server 2013 was not yet available from Microsoft, yet there were already books on the market that explained how to install and configure Exchange Server 2013. Some of the reviews attached to those books were quite critical, suggesting that it was better to wait for other books to be released. Service Pack 1 for Exchange Server 2013 was just released on February 24, 2014, so I suppose even this book might need a couple of small edits for accuracy once the Service Pack is installed.

I am a hands-on IT manager who has worked with various versions of Microsoft Exchange over the last 14 years. I do not have a significant amount of expertise with Exchange; however, I planned and performed the company’s migration from Exchange 5.5 in an NT 4 domain to Exchange 2007 in a Windows Server 2003 R2 Active Directory domain roughly seven years ago – that migration required a brief couple of hours early in the morning during which the mailboxes existed in Exchange 2003.

Fortunately for those companies already using Exchange 2007, the transition from Exchange 2007 to Exchange 2013 should be much simpler, at least based on the information found in the “Mastering Microsoft Exchange Server 2013” book. Just install Exchange 2007 SP3 and then Rollup 10, and then install Exchange 2013 Cumulative Update 2, redirect AutoDiscover and other virtual directories to the Exchange 2013 server, move the mailboxes, uninstall Exchange 2007, and unplug the old server. For the most part the book is extensively detailed, yet there are still small details that are left out of the book. For example:
* What should be done at 3AM when the installation of Exchange 2007 SP3 fails because some obscure item could not be automatically uninstalled, thereby leaving Exchange 2007 down and inaccessible to end users?

* What needs to be done to the client computers so that the Exchange 2013 self-signed certificate that is valid for five years may be used rather than having to obtain a certificate from a well-known certificate authority?

* Will any sort of prompt appear in an end-user’s Outlook program after that user’s mailbox is moved?

* What are the options for a client computer that is running either Windows XP or Windows Vista and Outlook 2003? If the Outlook Web Access supports HTML5, will it gracefully degrade in web browsers that do not support HTML5?

* What if a backup program other than Windows Server Backup is utilized, are such backups valid?

* What other items might need to be considered before uninstalling Exchange 2007, such as: reconfiguring and bouncing the Oracle databases; modifying the PBX phone system’s SMTP settings; adjusting the SMTP settings of all of the printers and fax machines; locating and replacing all programs that depend on Exchange’s MAPI protocol; replacing all of the BlackBerry phones or upgrading to a newer version of BES; changing and recompiling all of the mail enabled custom programs that use hard-coded IP addresses or names for the email servers; potentially waiting for all non-active directory joined computers to try to access the old server and then be redirected to the moved mailbox on the new server; etc.

The Mastering Microsoft Exchange book is already a 700+ page book, long enough that after reading the book cover to cover, one might question whether or not the book contained sufficient information to carry out the Exchange task at hand – unless good notes with page number references were recorded while reading the book. I suspect that answering the above questions might have extended the book another 300 pages. Hidden inside the book’s pages are gems of information, such as:
* There is no need to install the release-to-manufacturing version of Exchange 2013 on a new server; just install the latest Cumulative Update (or the Service Pack that was just released a couple of days ago).

* If possible, do not configure a CNAME or SRV record because the Outlook client computers will then display a warning message when started.

* 8GB is the minimum amount of memory for Exchange 2013 (Side note: 8GB was not enough memory to keep Exchange 2007 happy; why would that amount of memory be sufficient for Exchange 2013?), while the author recommends considering the purchase of a CPU with 8 cores.

* The information store service must be restarted any time a new database is created; existing databases should not exceed 200GB except in rare circumstances.

* After a mailbox move an Outlook Web user might not be able to reconnect to Exchange for 15 minutes.

* Devices using ActiveSync should connect to the Exchange 2013 Client Access Server, which will then connect to the Exchange 2013 mailbox server role. If the user’s mailbox is on Exchange 2007, then the Exchange 2013 mailbox role connects to the Exchange 2007 Client Access Server, which then connects to the user’s 2007 mailbox server role.

* Exchange 2013 has built-in send connectors for Exchange Servers in the same site – these built-in connectors will not show in the user interface. This tip disagrees with page 50 of the “Pro Exchange Server 2013 Administration” book, which states, “An Exchange server is by default not able to send messages to any other server. To achieve this function, however, a send connector has to be created.” It might be interesting to determine which book is correct.

The book has a very detailed chapter that explains how to work in PowerShell. On a side note, I am still left wondering why Microsoft did not incorporate GUI equivalents of all PowerShell commands that are required to properly configure Exchange 2013; or the counterpoint: why force an Exchange Administrator to install the full GUI version of Windows and use the GUI Exchange interface if it is possible to perform the entire Exchange configuration using cryptic PowerShell commands that will be used once and forgotten?

The book makes effective use of screen captures, including those screen captures only when doing so improves reader comprehension. The diagrams, for example the Exchange data and transaction logs diagram on page 10, and the memory allocation diagram on page 206, sometimes lack sufficient explanation to help the reader understand concepts without first spending a lot of time trying to decode the author’s intention for the diagram.

Errors or Concerns Identified So Far (I might update this list later):
* Page 73 of the book states about RAID 5 arrays, “These arrays are suitable for use with legacy Exchange Server mailbox databases on smaller servers, depending on the type of data and the performance of the array.” RAID 5 should not be used anywhere near any modern servers; it is simply not a safe enough option due to the increasing hard drive capacities and increasingly long rebuild times when a hard drive must be replaced. Using the manufacturer’s unrecoverable read error (URE) statistics for some Seagate and Western Digital drives (1 in 10E14 (1 in 100,000,000,000,000) bits read), the probability of encountering a URE when replacing one drive during an eight drive RAID 5 rebuild (with 3TB drives), resulting in a lost RAID 5 array, is roughly 76.98% (calculated as (1 - (99,999,999,999,999 / 100,000,000,000,000) ^ 147,000,000,000,000)). The arithmetic is sketched after this list.

* The book is missing an _ character in front of autodiscover when describing the requirements for a DNS SRV record on page 156.

* There was no mention that Outlook clients will have to close and restart Outlook after a mailbox is moved. The book simply states on page 361, “When the move is complete, the Active Directory attributes are updated, the old mailbox on the source database is deleted, and the new mailbox is activated… Client accesses to this mailbox will now be directed to the new mailbox database.”

* Pages 655 through 664 of the book did not state whether or not the new Data Loss Prevention functionality requires additional licenses beyond the standard Exchange CALs. DLP requires Enterprise CALs, per an Internet search.
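The rebuild-failure arithmetic from the first item in this list can be reproduced in a few lines of PowerShell (a sketch using the figures quoted above; the bit count corresponds to reading the seven surviving drives during the rebuild):
---------
# Probability of hitting at least one unrecoverable read error (URE) during a rebuild:
# P = 1 - (1 - p)^bits, where p is the per-bit URE rate
$ureRate  = 1 / 1e14              # manufacturer spec: 1 URE per 10^14 bits read
$bitsRead = 1.47e14               # bits read from the surviving drives (figure used above)
$pLostArray = 1 - [math]::Pow(1 - $ureRate, $bitsRead)
"{0:P2}" -f $pLostArray           # prints roughly 77%
---------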

Overall, this is a very good and thorough book, and a few errors from a book this size are not only acceptable, but probably expected. The “Pro Exchange Server 2013 Administration” book probably does a better job at outlining the various steps required to accomplish a task before diving into the details of each of the steps in a given task. That said, this book probably does a better job at providing the various filler details that are required for a successful deployment. However, some of those “gem” filler details mentioned previously in this review are difficult to locate when skim reading the book, for example when trying to determine how to kill that unwelcome security warning that appears whenever a VPN connected non-domain-joined client starts Outlook.


Fleck 48k 9000SXT dual tank water softener 48,000 grain with 9000 SXT digital metered valve
Fleck 48k 9000SXT dual tank water softener 48,000 grain with 9000 SXT digital metered valve
Offered by ABC Water Equipment
Price: $1,417.46
2 used & new from $1,398.97

4.0 out of 5 stars Good Quality Product, Chinese Sourced Resin, January 2, 2014
I ordered this item two months ago to replace a roughly 15-year-old twin-tank water softener that completely failed earlier this year (it was no longer picking up salt brine, and water flowed continuously to the drain). While the 48,000 grain capacity of this unit is not needed given the current monthly water usage, the old water softener was a smaller-capacity unit that seemed to have difficulty controlling calcium and lime deposits from the well water.

The instructions that shipped with the unit were insufficient for complete installation – only the manual for the Fleck control head was included. I downloaded and printed the PDF installation manual for the unit from the Abundant Flow Water website. The PDF manual was well written and almost sufficient for a successful installation on the first attempt. It did not, however, mention the large O ring that was sitting loosely in the control head’s box. That O ring was overlooked during the first installation attempt, resulting in an unstoppable dripping leak roughly 1/4 inch above the tank to which the control head was mounted. The manual that shipped with the control head showed the O ring in one of the assembly diagrams, so the control head had to be removed from the tank to install the O ring. Once reassembled, the same dripping leak remained. Removing the control head again, I determined that the plastic ring that is supposed to hold the O ring in place had never locked into position, even though it had been rotated clockwise several times after the O ring was installed. Applying downward pressure to the plastic ring while spinning it clockwise finally locked the O ring into the correct location, and after the third assembly of the water softener, the leak was gone. I believe this O ring was intended to be preinstalled and locked into position at the Fleck factory, which would explain why its installation was not covered in the PDF installation manual.

The PDF installation manual mentioned that a hot water heater installation kit, plumber’s pipe joint compound, and Teflon tape would be needed. The hot water heater installation kit that I bought at Lowe’s included the Teflon tape and all of the necessary fittings to retrofit the copper pipe connections used by the old water softener to the new softener. After installation, I put the water softener through three back-to-back regeneration cycles to clean any contaminants out of the system (the 48,000 grain unit included gravel for the tanks), on the theory that the final regeneration cycle would use clean soft water from the previously cleaned second tank. The water still had a bit of a plastic or charcoal taste until the fourth regeneration cycle ran automatically two weeks later. The water produced by the softener is now free of undesirable odors/tastes, and calcium and lime deposits no longer seem to be a problem. Before installation, a TDS meter indicated 505 PPM; that number increased slightly after the water softener was installed (as expected, since ion-exchange softening swaps calcium and magnesium for sodium), but the softened water does not have a salty taste.

The control head seems to be a high-quality U.S.-made unit, although the constant warnings in the PDF installation manual about not overtightening the screws in the plastic parts made me a little curious about the long-term durability of the unit. The twin tanks are made of very lightweight materials, which again made me question long-term durability. The brine tank that holds the salt is a tall rectangular shape rather than round, and seems to be of good quality. The included Fleck bypass valve was stamped “made in China”, as were the bags of resin.

Overall, I am very happy with the performance of the water softener. That said, the O ring issue and the made-in-China resin (rather than higher-quality U.S.-sourced resin) are negatives of this product. The product shipped in several easily managed boxes. I ordered this unit from the selling company’s website to take advantage of the free shipping offered there.

