TRANSFER AGENT "QUALITY SURVEYS": USEFUL TOOLS... OR SOMETHING THATS "NOT GONNA GIVE YOU NO ANSWER"? |
A few months ago a reader called to say she'd been asked to participate in a telephone survey about her satisfaction with her transfer agent, being conducted by a Chicago-area firm called RMB. "Do you know them?" she asked. "I answered their survey, but they didn't seem to ask very good questions, and I told them so," she said. So, when we got a few moments, we called directory assistance, introduced ourselves to the gruff-sounding fellow who answered "RBM," told him about our interest in following such surveys for our newsletter, and asked if we could ask a few questions about his firm and the survey they were taking. "You're pulling my leg," he said, and I assured him we were not. "We're just trying to get a little background about your company, about the principals of the firm, and about the survey itself." Somehow this touched a nerve: "I'm not gonna talk to you," he said. "We're a service company an' I'm not gonna give you no answer," he said, and slammed down the phone.

Very soon thereafter, summaries of two other surveys passed before our eyes, earlier versions of which have been reviewed here before. The first survey, entitled "The 1996 Transfer Agent Comparison Report," is a product of Staten Island-based Stockholder Consulting Services, Inc. It was an impressively thick document, with a lot of disclaimers and explanations up front (which is as it should be), lots of tabs and cross-tabs of the 39 questions that were asked, and a sample questionnaire at the end. The second was a press release from Group Five, a Skillman, NJ firm, headed "GROUP FIVE reports findings from 1996 Shareowners Services Corporate Satisfaction Study: The Bank of New York and Norwest Bank Achieve Highest Ratings; Overall Satisfaction of Corporate Clients Declines." A brief summary of the findings followed.

"Let's cast a real critical eye over these two surveys," we thought to ourselves. "Let's see how the surveys compare, and also how the transfer agents seem to compare from survey to survey. But most important, let's see what kind of answers the two surveys might give us to the kinds of questions a smart buyer would really want to ask about transfer agents."

Our first step is usually to look at the survey size: SCI had 239 responses on 19 transfer agents. Not too terrible, we thought at first, but oops, that only averages 15 clients per agent. And of the companies that were invited to respond (3,348), only 8.6% of them did so, despite the promise of maybe winning a prize. Since the Optimizer had just published fresh numbers on market share, we thought, "Let's take a quick look at what percentage of the customer base responded for a few of the bigger agents." We went straight to Question 39, re: "Overall Satisfaction with your Transfer Agent," on the basis of which SCI awarded its top prizes. Bank of New York received a "middlin'" mark of 4.0. But the score was based on a mere 10 responses (2.6% of their customer base, by our reckoning). Norwest had the same score of 4.0, on the basis of only 8 responses. "Hey," we thought, "weren't these middlin' guys the 'winners' of the Group Five survey?" First Chicago was SCI's number one among the large transfer agents, with a score of 4.25. This was based on 33 responses, a fairly respectable 8.8% of their client base. But on Group Five's chart of "Overall Performance," First Chicago was number four. UMB, which was back again as the winner of SCI's mid-size T/A Award, had a whopping five responses.
And UMB didn't even make the Group Five list, despite the fact that G5's sample size was more than twice as large! Boston Equiserve, which was number 2 among the large agents on the SCI list, was number 6 on Group Five's. One thing seemed crystal clear from this mishmash: the rankings sure don't match up.

SCI published the standard deviations and confidence indexes, and these are precisely the measures one needs to say how confident you can be in the reported results. But guess what: when you apply their standard deviations, let's say to ChaseMellon, which fared poorly, you discover that on the basis of 38 responses (a mere 1.6% of their customer base) their score on Question 39 could, theoretically, from a statistician's point of view, be as low as 2.42 ("poor," we'd say) or as high as 4.48, which we'd call "great." In fact, from a statistician's point of view, every single one of the SCI entrants could be the real winner. Put another way, because the sample sizes are so small, and the standard deviations are, consequently, all so large, there is no statistically significant difference among them that can be deduced from this survey. (We sketch the arithmetic below.)

Another major flaw in the SCI survey caught our eye: Among the most sensitive and important decisions a professional survey designer has to make are the "descriptors" that are used to explain the numerical grading scale (a) to the people who are giving the grades and (b) to the buyer of the survey, who wants, one assumes, to interpret the results in an unambiguous manner. In the SCI "guidelines to understanding the comparison report" we read that "5=Outstanding, 4=Above Average, 3=Average, 2=Below Average and 1=Poor." How could a respondent, who normally has only one transfer agent, possibly know whether theirs was "above average," we wondered. But then we went to the questionnaire itself. After each question it simply gave a line of numbers from 5 to 1, plus an "NA" column, and instructed "CIRCLE ONE." Not a single indication as to whether #1 was meant to be "best" or "outstanding," or whether #5 was. So all one can accurately say about this survey, in our opinion, is that "various people who use various but unknown kinds of stock transfer agents gave various numerical scores to various questions about various agents, and their marks on each question were averaged, and you can read the numbers for yourself!!!"

Now let's take a quick look at the Group Five study, "the most comprehensive independent study of its type." It had 610 companies responding, or about 10% of all investment-grade companies, by our reckoning. Sounds pretty good so far. So on to the next logical question: On the basis of Group Five's larger sample size, should we believe their headline, that "Bank of New York and Norwest Bank Achieve Highest Ratings," instead of congratulating SCI's big and small winners (1st Chicago/UMB)? Back to Group Five's chart for the sample size on each agent, but guess what catches our eye? The headline doesn't seem to be right! Norwest has the highest rating, for sure; an 89, whatever that means. But American Stock Transfer is second highest, with an 83. Bank of New York comes in "highest," with a third-place 80, only because Group Five decided this year to divide the field into "Large" and "Medium" Commercial Agents. What's the justification for such a division, one might wonder? Are large agents deserving of a bit more slack than medium ones? Do medium agents serve different kinds of clients, or different kinds of shareholders, than large ones do?
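For readers who like to check the arithmetic, here is a minimal sketch, in Python, of the textbook confidence-interval calculation a statistician would apply. The agent names, means, standard deviations, and sample sizes below are purely illustrative assumptions on a 1-to-5 scale, not figures taken from either survey; the point is simply that with samples this small, the intervals are so wide that two apparently "different" scores overlap almost completely.

```python
import math

Z_95 = 1.96  # two-sided 95% z value (large-sample approximation)

def confidence_interval(mean, std_dev, n):
    """Return an approximate 95% confidence interval for a mean score,
    assuming a simple random sample from a homogeneous population
    (the very assumption this article goes on to question)."""
    half_width = Z_95 * std_dev / math.sqrt(n)
    return mean - half_width, mean + half_width

# Purely illustrative figures on a 1-to-5 satisfaction scale;
# NOT the numbers actually published by SCI or Group Five.
illustrative_agents = {
    "Agent A": {"mean": 4.25, "std_dev": 1.1, "n": 33},
    "Agent B": {"mean": 4.00, "std_dev": 1.2, "n": 10},
    "Agent C": {"mean": 3.70, "std_dev": 1.3, "n": 8},
}

for name, stats in illustrative_agents.items():
    low, high = confidence_interval(**stats)
    print(f"{name}: mean {stats['mean']:.2f}, "
          f"95% CI roughly {low:.2f} to {high:.2f} (n = {stats['n']})")
```

Run those numbers and the "winner" at 4.25 and the also-ran at 3.70 produce intervals that overlap over most of their range, which is all we mean when we say that no statistically significant difference can be deduced.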
Finally, we asked, "what about those standard deviations and confidence indexes that might tell us whether theres any statistically significant difference between Group Fives highest and lowest scores?" Group Five didnt provide them in their summary, but they did say this: "In theory, in 95 cases out of 100, overall results based on such samples will differ by no more than three percentage points in either direction from what would have been obtained by asking all companies using commercial transfer agent services. The potential sampling error for smaller subgroups is greater." And here, dear surveyors, dear readers is what we consider to be the fatal flaw in both these surveys: Standard deviations and confidence measures are only valid if one assumes theres a "normal distribution," i.e. a fairly homogeneous universe of companies; one thats essentially the same, regardless of which agents population one samples. We contend that this is simply not the case. So lets go back to the original questions: what do we really want to know about transfer agents and what, if anything, do these surveys tell us? Suppose youre a company with 50,000 stockholders, 25,000 active DRP participants, and 10,000 optionees, all with options nearing the strike price. Do these surveys tell you how many companies with needs like yours are in the survey or at any of the agents you may want check on? Do they tell you how satisfied those particular kinds of companies may be, compared, lets say, to a given agents nondividend paying clients or to clients with 500,000 stockholders or more? We think not. |