A Model for Online Consumer Health Information Quality

Besiki Stvilia, Lorri Mon, and Yong Jeong Yi
College of Information, Florida State University, 101 Louis Shores Building, Tallahassee, FL. E-mail: {bstvilia, lmon, yjy4617}@fsu.edu

Received March 10, 2009; revised April 9, 2009; accepted April 9, 2009.

This article describes a model for online consumer health information consisting of five quality criteria constructs. These constructs are grounded in empirical data from the perspectives of the three main sources in the communication process: health information providers, consumers, and intermediaries, such as Web directory creators and librarians, who assist consumers in finding healthcare information. The article also defines five constructs of Web page structural markers that could be used in information quality evaluation and maps these markers to the quality criteria. Findings from correlation analysis and multinomial logistic tests indicate that use of the structural markers depended significantly on the type of Web page and type of information provider. The findings suggest the need to define genre-specific templates for quality evaluation and the need to develop models for an automatic genre-based classification of health information Web pages. In addition, the study showed that consumers may lack the motivation or literacy skills to evaluate the information quality of health Web pages, which suggests the need to develop accessible automatic information quality evaluation tools and ontologies.

Introduction

A widely used general definition of information quality (IQ) is the information's "fitness for use" (Wang […]; Stvilia, Gasser, Twidale, […]; Stvilia […]; Institute of Medicine, 1999). The Web is an important source for people who are seeking healthcare information (Hesse et al., 2005). The Pew […] low-income patients, who may not have health insurance; and rural patients, who live far away from healthcare facilities and may have to resort to self-care. The need for streamlining and easing the task of self-care for patients with chronic conditions has been recognized both by the government and in academia, where technologies have been proposed that include adaptive online questionnaires and wireless sensor devices to help in monitoring patients' health parameters or reminding patients of routine procedures (e.g., taking drugs) to avoid preventable complications that may result in emergency care (Harris, Wathen, […], Sanders, […], Berlin, […], Eysenbach, Powell, Kuss, […], Hardey, 2001). A survey of doctors on patient use of Internet health information found that the doctors estimated 44% of their patients had health problems because of using Internet information, whereas 85% of their patients were estimated to have benefited from online health information (Potts […]).

[…] IQ ontology(s); and evaluation and monitoring services. A number of conceptual IQ criteria sets have been proposed in the general IQ literature. For example, using an empirical approach (a user survey), Wang and Strong (1996) developed a taxonomy of IQ dimensions grouped into four categories: (a) Intrinsic: Accuracy, Objectivity, Believability, and Reputation; (b) Accessibility: Access, Security; (c) Contextual: Relevancy, Value-Added, Timeliness, Completeness, Appropriate Amount of Data; and (d) Representational: Interpretability, Ease of Understanding, Representational Consistency, Concise Representation.
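For reference, the Wang and Strong categories can be restated as a small lookup structure; the Python sketch below simply transcribes the four categories and their dimensions as listed above, and the variable name is introduced here for illustration rather than taken from the original instrument.

```python
# Wang and Strong's (1996) IQ dimensions, grouped by category as listed in the text.
# The dictionary itself is only an illustrative restatement, not part of the original study.
WANG_STRONG_IQ_TAXONOMY = {
    "Intrinsic": ["Accuracy", "Objectivity", "Believability", "Reputation"],
    "Accessibility": ["Access", "Security"],
    "Contextual": ["Relevancy", "Value-Added", "Timeliness", "Completeness",
                   "Appropriate Amount of Data"],
    "Representational": ["Interpretability", "Ease of Understanding",
                         "Representational Consistency", "Concise Representation"],
}

# Example: look up which category a given dimension belongs to.
category_of = {dim: cat for cat, dims in WANG_STRONG_IQ_TAXONOMY.items() for dim in dims}
print(category_of["Timeliness"])   # -> "Contextual"
```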

In healthcare informatics, Charnock, Shepperd, Needham, and Gann (1999) used a similar approach to develop an IQ assessment instrument or questionnaire by having healthcare domain experts develop a set of questions for a questionnaire divided into three sections: Reliability, Coverage, and Overall Quality. After reviewing 79 empirical studies of consumer health information on the Web, Eysenbach et al. (2002) found Accuracy, Completeness, Readability, Design, Disclosures, and References as the most frequently cited quality criteria.

To operationalize conceptual IQ models and criteria effectively through questionnaires or metrics, not only is access to the information itself needed, but also the metadata of the processes of its creation, maintenance, and use (Stvilia, Gasser, Twidale, & Smith, 2007). Often, access to the behind-the-scenes metadata and policy information is not available, and members of the government and healthcare community have recognized this problem. The goal of the Healthy People 2010 Information Access Project at the U.S. Department of Health and Human Services (DHHS, 2007) has been to increase the proportion of health-related Web sites that disclose information that can be used for assessing their quality. The project identified six properties or types of metadata essential for carrying out an IQ evaluation of a health Web site: (a) the identity of owners, developers, and sponsors; (b) the purpose of the site; (c) the sources of the content; (d) the privacy and confidentiality of personal information; (e) evaluation or feedback mechanisms; and (f) content update procedures. We found it interesting that a survey of 102 Web sites conducted by the same project found that none of the healthcare Web sites provided all this information, and less than 4% of the Web sites disclosed the sources of their content and how it was updated.
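One way to operationalize the six DHHS disclosure properties is as a simple checklist score over a coded Web site. The sketch below is a minimal illustration under that assumption; the property keys and the check_disclosures() helper are hypothetical names introduced here and do not come from the Healthy People 2010 project.

```python
# A minimal checklist sketch for the six DHHS disclosure properties listed above.
# Property keys and the helper function are hypothetical, introduced only for illustration.
DHHS_DISCLOSURE_PROPERTIES = (
    "identity_of_owners_developers_sponsors",
    "purpose_of_site",
    "sources_of_content",
    "privacy_and_confidentiality",
    "evaluation_or_feedback_mechanisms",
    "content_update_procedures",
)

def check_disclosures(observed: dict) -> float:
    """Return the share of the six properties a site discloses (0.0-1.0).

    `observed` maps each property name to True/False as judged by a human coder
    or an automated detector.
    """
    disclosed = sum(bool(observed.get(prop, False)) for prop in DHHS_DISCLOSURE_PROPERTIES)
    return disclosed / len(DHHS_DISCLOSURE_PROPERTIES)

# Example: a site disclosing only ownership and purpose scores 2/6 (about 0.33).
example = {"identity_of_owners_developers_sponsors": True, "purpose_of_site": True}
print(check_disclosures(example))
```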

Likewise, prior research has revealed inconsistencies in how healthcare consumers evaluate the quality of online information. For example, Fox (2006) found that 75% of health information seekers did not consistently check online healthcare information for basic IQ indicators, such as the publication date or the source of the information. A laboratory experiment by Eysenbach and Köhler (2002) found that although users described a Web site's source, professional design, formal or official appearance, language, and ease of use as the criteria they used to evaluate the quality of healthcare Web pages, observations in an actual information retrieval experiment indicated that none of the users actually examined Web pages for these quality cues. These findings point to a possible trade-off between quality and cost in terms of the time spent by users in evaluating quality, and they indicate the contextual nature of quality evaluation. Earlier studies showed that the same information could be evaluated differently in different circumstances and by members of different age and social groups (Fox […]; Stvilia […]).

[…] (b) the record of past performance or encounters (Process Based); and (c) social institutions and intermediaries (Institution Based). Bailey, Gurak, and Konstan (2001) built on the model proposed in the literature by Zucker and others by synthesizing a taxonomy of trust dimensions and sources.

The sources of trust they established included Presumptions, Surface Inspections, Experience, and Third-Party Institutions. The dimensions of trust, on the other hand, comprised Attraction, Dynamism, Expertness, Faith, Intentions, and Localness.

Effects of online trust-building mechanisms on the overall trustworthiness of a Web site might differ in different circumstances. Chang and Cheung (2005) showed that a third-party certification was the most effective way of increasing the trust of consumers in an online vendor when the vendor's reputation was unknown ahead of time. At the same time, another trust-building mechanism, the return policy, had a significant effect on the vendor's trustworthiness only when the consumer was aware of the vendor's reputation. By inference, this might suggest that the effects of trust-building mechanisms may differ with the type of online information provider; consequently, different providers (e.g., the government, commercial sites) might use different cues to convey trust to consumers.

The utility of Web page trust markers and third-party endorsements was investigated in the context of using Web sites to answer library "ready reference" questions.

Frické and Fallis (2004) examined the relationships between some quality indicators (e.g., a copyright sign, citations, a lack of advertising) and Web page accuracy. They found that although the presence of a copyright sign and the currency of a Web page correlated positively with accuracy, the rest of the quality indicators did not. Furthermore, their analysis showed that most of the sampled Web pages contained accurate answers to ready reference questions. In an earlier study, the same authors examined the use of quality indicators in health Web pages and found that displaying the Health on the Net (HON) Code logo, having an organization domain, and displaying a copyright were positively correlated with accuracy (Fallis […]).

[…] consumers of health information, their health questions, and their perceptions of quality indicators; and intermediaries such as librarians and Web directory creators, whose criteria for IQ are applied in evaluating and selecting health Web sources. In particular, the study aimed to address the following research questions:

- What are the "virtues" or criteria considered to be important when evaluating the quality of healthcare information?
- What are the quality markers that providers may use to signal IQ, and are these markers related?
- What are some of the types of healthcare information Web pages and providers?
- Does the use of quality markers vary with the type of Web page and the type of provider?

Procedures

The study used a mixed methodology with multiple data sources.

In particular, the researchers analyzed the healthcare informatics literature to identify the types of activities that use healthcare information, the types of IQ problems, and the sets of quality criteria, markers, and metrics. The findings of the literature analysis were combined with the IQ criteria set from the general framework of IQ measurement developed earlier (Stvilia, Gasser, Twidale, […]; Wang […]; Yoshioka, Herman, Yates, […]). […] the Internet Public Library (IPL; http://ipl.org), an online digital library question-answering (Q&A) service. […]

[Figure/table note: DHHS = U.S. Department of Health and Human Services; HON = Health On the Net.]

Examples of the community-defined quality model were Wikipedia pages. It is important to note that at the time this article was written, Wikipedia did not have a separate IQ model for its healthcare-related articles, but rather used a general model (see Figure 1). A detailed description of Wikipedia's IQ assurance model can be found elsewhere (Stvilia, Twidale, et al., 2008).

Finally, some of the Web pages carried seals of approval from third-party rating agencies as a sign of adherence to their quality principles. The most frequently occurring seal was from the Health On the Net Foundation (HON). The HON principles (the HON Code) comprised seven general principles, which then are further detailed into specific operationalization guidelines. In addition, the HON Code quality guidelines contain sections for both "closed" (centrally controlled) and "open" (collaborative or community-based) Web sites.

To identify the healthcare IQ criteria used by consumers and information intermediaries, the researchers content analyzed a sample of the IPL's Q&A […] (MSA = 0.812) among the criteria. A scree plot suggested selecting the first five components. In addition, because of the sample size (80 participants), the cutoff size for the criteria loadings on the factors was set to 0.65 (Hair, Black, Babin, Anderson, […]; see Table 3).

The criteria loaded on the first factor were mostly access related (see Table 3). The second factor construct included usefulness criteria. The third factor construct had both accuracy and trust-related criteria, with Accuracy criteria having the highest loading. The fourth factor had a single criterion, Authority. The criteria loaded on the fifth factor construct could be categorized as related to Completeness. The criteria constructs were then ranked by the averages of their loading rankings (see Table 4). The Accuracy construct was ranked the highest, followed by the Completeness construct.

[Table 3 note: […] rotation method: varimax with Kaiser normalization.]

TABLE 4. IQ criteria constructs.

IQ criteria constructs   Ranking   IQ criteria
Accuracy                 4.41      Accuracy, credibility, reliability
Completeness             4.17      Completeness, clarity
Authority                3.8       Authority
Usefulness               3.75      Ease of use, objectivity, utility
Accessibility            3.57      Accessibility, cohesiveness, consistency, volatility
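A sketch of how an analysis of this kind could be run is shown below, assuming a hypothetical ratings file (iq_criteria_ratings.csv, 80 respondents by IQ criteria) and the third-party factor_analyzer package; the study's own software and exact extraction settings are not given here, so the call should be read as an approximation rather than the authors' code.

```python
# A minimal sketch of the reported analysis: sampling adequacy (MSA), a scree inspection,
# a five-factor varimax solution, and a 0.65 loading cutoff. The file name and column
# layout are hypothetical; the package's default extraction method is used as a stand-in.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

ratings = pd.read_csv("iq_criteria_ratings.csv")      # respondents (rows) x IQ criteria (columns)

_, overall_msa = calculate_kmo(ratings)               # overall measure of sampling adequacy
print(f"Overall MSA: {overall_msa:.3f}")              # the study reports 0.812

fa = FactorAnalyzer(n_factors=5, rotation="varimax")  # five components, varimax rotation
fa.fit(ratings)

eigenvalues, _ = fa.get_eigenvalues()                 # plot these for a scree plot
loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                        columns=[f"F{i + 1}" for i in range(5)])

# Keep only loadings at or above the 0.65 cutoff used in the study.
salient = loadings.where(loadings.abs() >= 0.65)
print(salient.dropna(how="all"))
```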

The analysis of the literature and the content analysis of the Yahoo! Directory sample suggested 23 document components or markers that could be used in IQ evaluation (see Table 5). In addition, the analysis identified seven types or genres of Web pages: Article, Blog, Directory, Factsheet, Instrument, Mainpage, and Q&A.

[…] type included charities, associations, professional organizations, and societies. The research type covered both research institutions and individual project-based Web sites.
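To give a sense of how some of these structural markers could be detected automatically (one motivation the article raises for accessible IQ evaluation tools), the sketch below matches a handful of marker keywords against a page's text. It is a deliberately rough heuristic: the study coded markers manually, the example URL is hypothetical, and a production tool would need far more robust rules.

```python
# A rough heuristic for flagging a few of the Table 5 markers in a page's HTML.
# Keyword matching is an illustrative shortcut, not the study's coding procedure.
import re
import requests
from bs4 import BeautifulSoup

MARKER_PATTERNS = {
    "About": re.compile(r"\babout\b", re.I),
    "Contact us": re.compile(r"\bcontact\b", re.I),
    "Privacy policy": re.compile(r"\bprivacy\b", re.I),
    "Disclaimer": re.compile(r"\bdisclaimer\b", re.I),
    "Copyright": re.compile(r"©|copyright", re.I),
    "Site map": re.compile(r"\bsite\s*map\b", re.I),
}

def detect_markers(url: str) -> dict:
    """Return {marker name: present?} for the page at `url`."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    return {name: bool(pattern.search(text)) for name, pattern in MARKER_PATTERNS.items()}

# Example (hypothetical URL):
# print(detect_markers("https://example.org/health/factsheet.html"))
```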

The different types of Web pages and information providers seemed to use different quality markers. A nonparametric Kruskal-Wallis test showed that the presence or absence of the majority of quality markers in the sample was significantly dependent on both the type of document and the type of provider (see Table 5).
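The kind of test reported in Table 5 can be sketched as follows, assuming a hypothetical coding sheet pages.csv with one row per page, a doc_type (or provider_type) column, and one 0/1 column per marker; the file and column names are illustrative, not the study's actual variables.

```python
# Kruskal-Wallis H test of a marker's presence/absence across page or provider types.
# pages.csv and its column names are assumptions made for this illustration.
import pandas as pd
from scipy.stats import kruskal

pages = pd.read_csv("pages.csv")

def kw_by_group(df: pd.DataFrame, marker: str, group_col: str):
    """Compare the distribution of a 0/1 marker column across the groups in group_col."""
    samples = [group[marker].to_numpy() for _, group in df.groupby(group_col)]
    return kruskal(*samples)                  # returns (H statistic, p value)

h, p = kw_by_group(pages, "copyright", "doc_type")
print(f"H = {h:.2f}, p = {p:.3f}")
```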

The quality markers that were significantly related to the Web page types in the previous test were then regressed on the Web page types using multinomial logistic regression (model fit likelihood ratio: χ² = 217.62; p < 0.0001). The Article type was used as a baseline. The regression analysis confirmed that the markers were statistically significant in distinguishing some of the Web page types from each other. For instance, the presence of the Copyright marker was a negative predictor of the Directory and Factsheet types over the Article type. In addition, holding all the other variables constant, having the Disclaimer component increased the odds of the Web page being of the Factsheet type rather than the Main Page type. A similar regression analysis of the qu[…]
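A comparable multinomial model can be sketched with statsmodels, again over the hypothetical pages.csv coding sheet; the marker subset and category labels below are illustrative, and the outcome is recoded so that Article is the reference category, mirroring the baseline described above.

```python
# Multinomial logistic regression of page type on marker presence, with Article as baseline.
# The coding sheet, marker columns, and category labels are assumptions for this sketch.
import pandas as pd
import statsmodels.api as sm

pages = pd.read_csv("pages.csv")
page_types = ["Article", "Blog", "Directory", "Factsheet", "Instrument", "Mainpage", "QA"]
pages["doc_type"] = pd.Categorical(pages["doc_type"], categories=page_types)

markers = ["copyright", "disclaimer", "references", "search_box"]   # illustrative subset
X = sm.add_constant(pages[markers].astype(float))
y = pages["doc_type"].cat.codes                  # 0 = Article, the reference category

result = sm.MNLogit(y, X).fit()
print(result.summary())                          # coefficients are log-odds relative to Article
print(result.llr, result.llr_pvalue)             # model-fit likelihood ratio and its p value
```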

TABLE 5. Kruskal-Wallis correlation test of the quality markers on the document and provider types (150 cases).

                                              Document type            Provider type
Document marker                               χ²      df   p           χ²      df   p
About                                         19.70   6    0.003       6.24    4    0.182
Accessibility                                 10.22   6    0.116       66.51   4    0.000
Advertising policy                            11.89   6    0.064       11.70   4    0.020
Author affiliation                            14.35   6    0.026       6.51    4    0.164
Author credentials                            11.08   6    0.086       21.21   4    0.000
Author name                                   12.48   6    0.052       20.74   4    0.000
Contact us                                    18.72   6    0.005       6.85    4    0.144
Copyright                                     12.16   6    0.058       17.06   4    0.002
Date of creation                              11.37   6    0.078       3.60    4    0.463
Date of last update                           23.87   6    0.001       12.76   4    0.013
Disclaimer                                    20.11   6    0.003       9.43    4    0.051
Editorial review process                      15.61   6    0.016       2.50    4    0.644
Formal IQ criteria                            49.37   6    0.000       60.90   4    0.000
Payment                                       4.66    6    0.588       3.06    4    0.548
Privacy policy                                18.66   6    0.005       16.21   4    0.003
Provider name                                 16.65   6    0.011       64.75   4    0.000
Quality guidelines                            24.93   6    0.000       47.05   4    0.000
Reference(s)                                  52.02   6    0.000       5.85    4    0.211
Search box                                    39.88   6    0.000       17.95   4    0.001
Site map                                      12.73   6    0.048       15.49   4    0.004
Sponsored content                             4.90    6    0.557       3.72    4    0.445
Statement of purpose or mission statement     7.18    6    0.304       6.38    4    0.173
Third-party quality seal                      17.78   6    0.007       23.13   4    0.000
Terms of use, policies, and regulations       19.17   6    0.004       10.38   4    0.035
