ResearchGate.net and Your Professional Identity

ResearchGate.net is a popular platform among researchers for building their reputations and, often, for sharing copies of their papers, whether or not publishers allow it. It is the product of a start-up company whose financing comes in part from organizations, such as the Wellcome Trust, that are also among the main funders of open access to the results of funded research, as recently described in an article in TechCrunch.

It is widely used by people who want to be known for their scholarship and research. Frequent email updates about people reading or requesting your papers, or about new papers by your past co-authors, are either welcome or annoying, depending on your viewpoint at the time!

The system prompts researchers to upload the PDFs of their articles, and authors are frequently unaware that publishers forbid such sharing in the author contract. Many upload them anyway, and ResearchGate.net is a major source of the full text found via Google Scholar.

Dr. Hamid R. Jamali studied the full text available on ResearchGate.net in “Copyright compliance and infringement in ResearchGate full-text journal articles,” published by Springer in Scientometrics in February 2017 and available for a fee. The author’s version of the article is posted on ResearchGate.net, which is allowed by his contract with Springer. He found that 51.3% of the articles in the study should not have been posted on ResearchGate.net under the publishers’ policies.

In our workshops on managing your professional or research identity, we discuss the appropriate use of a variety of tools, including ResearchGate.net, and we are always happy to consult on questions about using such services.

Barbara DeFelice and Jen Green

Scholarly Communication, Copyright and Publishing Program at Dartmouth



Open Data Day

Image provided by opendataday.org

SPARC (the Scholarly Publishing and Academic Resources Coalition) released their member update this morning and shared the news that they have officially joined the Data Coalition, an organization that “advocates on behalf of the private sector and the public interest for the publication of government information as standardized, machine-readable data.” The Data Coalition also organizes and supports data advocacy events, such as DataRefuge, which are popping up all around the country, including in New England (e.g., DataRescue Boston at MIT on February 18). Librarians and archivists have worked diligently across many presidential administrations to protect and ensure access to government data, but last week’s Fair Use Week discussions made clear that their concern over government data is especially pronounced right now. DataRefuge is a project originally scoped to rescue climate and environmental data, but librarians, working through the Libraries Network, have recently broadened it to include the protection and preservation of more types of born-digital government data. These events and activities support the DATA Act, passed in 2014, and the proposed OPEN Government Data Act.

The current prominence of DataRefuge and the news that SPARC has joined the Data Coalition are timely, since this Saturday, March 4th, is International Open Data Day. This is a day dedicated to engaging researchers, students, and fellow librarians in creating, using, and reusing Open Data. One of SPARC’s first actions as a Data Coalition member will be to co-host two events during Open Data Day. The first is a two-day, global Open Data ‘do-a-thon’ co-hosted with the U.S. National Institutes of Health; it is based around a face-to-face gathering in London, but you can join remotely to participate in discussions before, during, and after the event. Later, in Washington, DC, SPARC, along with the Sunlight Foundation and the Center for Open Data Enterprise, will co-host afternoon discussions with leaders from government and civil society about Open Data. You are welcome to join in those discussions.

SPARC will likely post Open Data Day activities on their Twitter feed, and they are an organization I’d recommend following if you have an interest in open access, open educational resources, and open data. You can follow SPARC at @SPARC_NA. Dartmouth is proud to be a SPARC member.



Fair Use Week 2017


Modified from the Fair Use Week Infographic, http://fairuseweek.org/resources/

Last week was brought to us by the “Love Your Data Week” celebration, and we learned through a series of posts how to better provide for the care and feeding of our data. This week is brought to us by the “Fair Use/Fair Dealing Week” celebration, where we revisit the significance of fair use in research, learning, work, and life.

What is fair use and how does it impact you? 

Fair use is a legal exception within copyright law that allows a person to use copyrighted materials without permission under specific circumstances. You or I can make a fair use determination on any day and at any time, but the factors that must be considered are the same ones a judge would weigh in a court of law. These factors are:

  1. the purpose and character of the use
  2. the nature of the copyrighted work
  3. the amount and substantiality of the portion taken, and
  4. the effect of the use upon the potential market

Stanford has a great resource to review the definitions and details of fair use: http://fairuse.stanford.edu/overview/fair-use/four-factors/

Academics often think of fair use within the context of teaching, research, and scholarship: fair use allows them to use portions of copyrighted materials to teach concepts in their classrooms and to incorporate the critical works of others in support of new innovations, creativity, and ideas. But fair use also applies in everyday circumstances, ones that enrich us culturally, intellectually, socially, and personally. Some examples of ways that copyrighted content might be used under the umbrella of fair use are:

  • reporting the news
  • making fun of the news through parody
  • making art from someone else’s art
  • reproducing a book in large print or braille

An infographic from fairuseweek.org helps illustrate this point:


Fair Use Week Website at http://fairuseweek.org/wp-content/uploads/2016/02/ARL-FUW-Infographic-r5.pdf

…and here is what fair use looks like in a day in the life of a college student…


Fair use in a day in the life of a college student http://fairuseweek.org/

If you are new to the concept of fair use, it can be complicated to understand and difficult to determine whether it applies to your specific need. Barbara and I teach multiple workshops on fair use throughout the academic year. We also meet individually with students, faculty, and staff to help them make a fair use determination. If you have questions about fair use or other copyright issues, please don’t hesitate to contact us!


Love Your Data Week Feb 13th – 17th 2017

February 17th: Rescuing Unloved Data

Post authored by Lora Leligdon


We are wrapping up Love Your Data week with rescuing unloved data.

As always, please join in the conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata).

And while today is the last day of our event, there is still time to register for workshops on data management at Dartmouth. Starting on February 20th, the library will host six data management workshops exploring different stages of the research data life cycle, including data management planning, cleaning, visualizing, storing, sharing, and preserving. Please visit dartgo.org/data_management_workshops for more information and to register to attend.

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at https://loveyourdata.wordpress.com/lydw-2017/!

“Data that is mobile, visible and well-loved stands a better chance of surviving” ~ Kurt Bollacker

Things to consider:

Legacy, heritage, and at-risk data share one common theme: barriers to access. Data recorded by hand (field notes, lab notebooks, handwritten transcripts, measurements, or ledgers), stored on outdated technology, or saved in proprietary formats are at risk.

Securing legacy data takes time, resources, and expertise, but it is well worth the effort: old data can enable new research, and its loss could impede future work. So how do you approach reviving legacy or at-risk data?

How do you eat an elephant? One bite at a time.

  1. Recover and inventory the data (a minimal inventory sketch follows this checklist)
    • Format, type
    • Accompanying material: codebooks, notes, marginalia
  2. Organize the data
    • Depending on discipline/subject: date, variable, content/subject
  3. Assess the data
    • Are there any gaps or missing information?
    • Triage: consider the nature of the data along with the ease of recovery
  4. Describe the data
    • Assign metadata at the collection/file level
  5. Digitize/normalize the data
    • Digitization is not preservation. Choose a file format that will retain its functionality (and accessibility!) over time: “Which file formats should I use?”
  6. Review
    • Confirm there are no gaps, or indicate where gaps exist
  7. Deposit and disseminate
    • Make the data open and available for re-use
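As a concrete starting point for step 1, here is a minimal sketch (in Python, with a hypothetical folder path and output file name) of inventorying a directory of recovered files by format and size:

```python
import csv
import os
from collections import Counter

def inventory(root_dir, output_csv="inventory.csv"):
    """Walk a directory of recovered files; record path, extension, and size, and tally formats."""
    counts = Counter()
    with open(output_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "extension", "size_bytes"])
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                ext = os.path.splitext(name)[1].lower() or "(none)"
                writer.writerow([path, ext, os.path.getsize(path)])
                counts[ext] += 1
    return counts

# Hypothetical usage:
# print(inventory("recovered_field_notes/"))
```

Even a simple format tally like this makes triage (step 3) easier: file types with no modern reader can be flagged for conversion first.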

Stories

Resources

That’s a wrap for our Love Your Data week posts on data quality!  Thanks for reading along, and we hope you’ve learned to love your data. 

If you have any questions on data management, please contact Lora Leligdon.


Love Your Data Week Feb 13th – 17th 2017

February 16th: Finding the Right Data

Post authored by Lora Leligdon


Thursday brings us to finding the right data for your project.

Need help finding the right data? Check out the Library’s Research Guides on data or contact your subject librarian for personal assistance.

To find the right data, have a clear question and locate quality data sources.

Things to consider

In a 2004 Science Daily News article, the National Science Foundation used the phrase “here there be data” to highlight the exploratory nature of traversing the “untamed” scientific data landscape. The phrase harkens back to older maps of the world, where unexplored territories bore the warning “here there be [insert mythical/fantastical creature]” to alert explorers to the dangers of the unknown. While the research data landscape is (slightly) less foreboding, there is still an adventurous quality to looking for research data.

Stories

Resources

  1. Formulate a question

The data you find is only as good as the question you ask. Think of the age-old “who, what, where, when” criteria when putting together a question – specifying these elements helps to narrow the map of data available and can help direct where to look!

  • WHO (population)
  • WHAT (subject, discipline)
  • WHERE (location, place)
  • WHEN (longitudinal, snapshot)

This page from Michigan State University Libraries’ “How to find data & statistics” guide does a great job of further articulating these key elements to forming a question and putting together a data search strategy.

  2. Locate data source(s)

After you’ve identified the question, you can begin the scavenger hunt that is locating relevant source(s) of research data. One way to find data is to think about what organization, industry, discipline, etc. might gather and/or disseminate data relevant to your question.

Thinking about your source can also help with evaluating whether or not you have relevant, quality data to use.

  • There is an increasing number of city or statewide data portals – some examples: New York City, Hawaii, and Illinois – that provide access to regional data on everything from traffic patterns to restaurant inspection results.

Check out this post from Nathan Yau, data viz whiz and creator of FlowingData — his post includes some of the sources listed above, but also highlights tips like scraping data from websites and using APIs to access data.
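Many of the portals mentioned above expose simple web APIs. As an illustration only, here is a minimal Python sketch of pulling JSON records from a hypothetical open data endpoint; the URL and query parameter are placeholders, so check the portal’s own API documentation for the real ones.

```python
import requests

def fetch_open_data(endpoint, limit=100):
    """Fetch up to `limit` JSON records from a (hypothetical) open data API endpoint."""
    response = requests.get(endpoint, params={"$limit": limit}, timeout=30)
    response.raise_for_status()
    return response.json()

# Hypothetical endpoint; real portals document their own URLs and parameters.
records = fetch_open_data("https://data.example.gov/resource/inspections.json")
print(f"Retrieved {len(records)} records")
```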

  3. Cite accordingly

The ability to reuse data is only as good as its quality, and the ability to find relevant data is only possible if the data are discoverable. For producers of data, that means following many of the practices articulated in earlier posts. For consumers of data, it means being a good researcher and citing your data sources.

In general, citing data follows the same template as any other citation — include information such as author, title, year of publication, edition/version, and persistent identifier (e.g., Digital Object Identifier, Uniform Resource Name). Check with your data source as well – they may provide guidance on how they want to be cited!
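To make that template concrete, here is a minimal sketch that assembles a generic data citation string from the elements listed above; the dataset, repository, and DOI are entirely hypothetical, and your style guide or repository may prescribe a different order.

```python
def format_data_citation(author, title, year, version, repository, doi):
    """Assemble a generic data citation; check your style guide and repository for specifics."""
    return (f"{author} ({year}). {title} (Version {version}) [Data set]. "
            f"{repository}. https://doi.org/{doi}")

# Hypothetical example:
print(format_data_citation(
    author="Smith, J.",
    title="Granite Weathering Measurements, 1990-2000",
    year=2017,
    version="1.0",
    repository="Example Data Repository",
    doi="10.1234/example.5678",
))
```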

See DataONE and ICPSR pages on data citation for examples and more guidance.

Activities

BYODM — build your own (research) data map! Ask yourself:

  • What data sources are most relevant to my research?
  • Are there relevant data sets generated or held locally that I have access to?
  • What information do I need to retrace my steps back to these data (e.g., contact information, URLs, etc.)?

Where have you found the right data? Join us on Twitter or Facebook (#LYD17 #loveyourdata) to share your stories! Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at https://loveyourdata.wordpress.com/lydw-2017/!

Tomorrow we are going to wrap up the week with rescuing unloved data.


Love Your Data Week Feb 13th – 17th 2017

February 15th: Good Data Examples

Post authored by Lora Leligdon


Day three of Love Your Data week brings us to some examples of good data! What are good data?

Good data are FAIR – Findable, Accessible, Interoperable, Re-usable

Things to consider:

What makes data good?

  • Data have to be readable and well documented enough for others (and a future you) to understand.
  • Data have to be findable to keep them from being lost. Information scientists have started to call such data FAIR: Findable, Accessible, Interoperable, Re-usable. One of the most important things you can do to keep your data FAIR is to deposit them in a trusted digital repository; do not use your personal website as your data archive. (A small metadata sketch follows this list.)
  • Tidy data are good data. Messy data are hard to work with.
  • Data quality is a process, starting with planning and continuing through curation of the data for deposit.
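To make “Findable” a bit more concrete, here is a minimal, repository-agnostic sketch of the kind of descriptive metadata record you might prepare before depositing a dataset; every field name and value here is hypothetical, and a real repository will have its own deposit form or schema.

```python
# A minimal descriptive metadata record (all values are invented).
dataset_metadata = {
    "title": "Campus Tree Inventory, 2010-2016",
    "creators": ["Doe, A.", "Smith, J."],
    "description": "Species, location, and trunk diameter for trees surveyed annually on campus.",
    "keywords": ["forestry", "urban trees", "long-term monitoring"],
    "license": "CC-BY-4.0",
    "version": "1.0",
    "file_formats": ["CSV"],
    "related_publication": "https://doi.org/10.1234/example.9999",  # hypothetical DOI
}
```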

Stories

Example: This dataset is still around and usable more than 50 years after the data were collected and more than 40 years after it was last used in a publication.

Counterexample: This article: http://www.sciencedirect.com/science/article/pii/S1751157709000881 promises:

“Statistical scripts and the raw dataset are included as supplemental data and are also available at http://www.researchremix.org.”

Alas:

[Screenshot: the supplemental data link is no longer available.]

(Used at the recommendation of the author, who has long since become enlightened. The data have made it into a trusted repository too.)

Hadley Wickham tells you how to tidy your data: http://vita.had.co.nz/papers/tidy-data.pdf
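Wickham’s paper uses R, but the same reshaping works in most tools. As a small illustration (not taken from the paper), here is a Python/pandas sketch that melts a messy “wide” table, with one column per year, into a tidy “long” table with one observation per row; the table and values are made up.

```python
import pandas as pd

# A messy "wide" table: one column per year (values are made up).
wide = pd.DataFrame({
    "site": ["A", "B"],
    "2015": [3.1, 2.7],
    "2016": [3.4, 2.9],
})

# Tidy "long" form: one row per site-year observation.
tidy = wide.melt(id_vars="site", var_name="year", value_name="measurement")
print(tidy)
#   site  year  measurement
# 0    A  2015          3.1
# 1    B  2015          2.7
# 2    A  2016          3.4
# 3    B  2016          2.9
```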

Resources

Example: Data can take many forms. This compilation of “Morale and Intelligence Reports” collected by the UK Government during and after the war is a great example of qualitative historical data: https://discover.ukdataservice.ac.uk/catalogue/?sn=7465

Activities

  • Want to learn more? Register for and attend one of the Dartmouth research data management workshops on planning, cleaning, visualizing, storing, sharing, and preserving your data.
  • What is your favorite data set? How/why is it good for your project? Try out the FAIR Principles to describe and share examples of good data for your discipline. Tell us on Twitter or Facebook (#LYD17 #loveyourdata).

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at https://loveyourdata.wordpress.com/lydw-2017/!

We’re getting close to the end of our quality data posts! But stay tuned – tomorrow we will be discussing how to find the right data for your project.


Love Your Data Week Feb 13th – 17th 2017

February 14th: Documenting, Describing, Defining

Post authored by Lora Leligdon


For the second day of Love Your Data week, we will be discussing good data documentation!

Good documentation tells people they can trust your data by enabling validation, replication, and reuse.

Things to consider:

Why does having good documentation matter?

  • It contributes to the quality and usefulness of your research and the data itself – for yourself, colleagues, students, and others.
  • It makes the analysis and write-up stages of your project easier and less stressful.
  • It helps your teammates, colleagues, and students understand and build on your work.
  • It helps to build trust in your research by allowing others to validate your data or methods.
  • It can help you answer questions about your work during pre-publication peer review and after publication.
  • It can make it easier for others to replicate or reuse your data. When they cite the data, you get credit! Include these citations in your CV, funding proposal, or promotion and tenure package.
  • It improves the integrity of the scholarly record by providing a more complete picture of how your research was conducted. This promotes public trust and support of research!
  • Some communities and fields have been talking about documentation for decades and have well-developed standards for it (e.g., geospatial data, clinical data), while others do not (e.g., psychology, education, engineering). No matter where your research community or field falls on this spectrum, you can start improving your documentation today (see the small data-dictionary sketch after this list)!
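One kind of documentation that travels well with a dataset is a data dictionary describing each variable. Here is a minimal sketch, in Python, of writing one alongside a data file; the variable names, types, and descriptions are invented.

```python
import csv

# A minimal data dictionary: one row per variable in the dataset (all entries invented).
data_dictionary = [
    {"variable": "site_id",   "type": "string", "units": "",      "description": "Unique identifier for each study site"},
    {"variable": "temp_c",    "type": "float",  "units": "deg C", "description": "Mean daily air temperature"},
    {"variable": "collected", "type": "date",   "units": "",      "description": "Collection date, ISO 8601 (YYYY-MM-DD)"},
]

with open("data_dictionary.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["variable", "type", "units", "description"])
    writer.writeheader()
    writer.writerows(data_dictionary)
```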

Stories (learn from others’ mistakes and successes)

Resources

Practical Tips by data type & format

General Resources

Activities

  • Want to learn more? Attend the upcoming Dartmouth workshops on data management to learn hands-on approaches to ensuring quality data.
  • Check out some of the documentation guidelines and standards out there. What can you borrow or learn from them to improve your own documentation?
  • Join the conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata).

Stay tuned… tomorrow we will be providing good data examples!

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at https://loveyourdata.wordpress.com/lydw-2017/!


Love Your Data Week Feb 13th – 17th 2017

February 13th: Defining Data Quality

Post authored by Lora Leligdon


Welcome to Love Your Data week! Each day this week we will be blogging, tweeting, and sharing practical tips, resources, and stories to help you adopt good data practices. Up first, know your data quality!

Data quality is the degree to which data meets the purposes and requirements of its use. Depending on the uses, good quality data may refer to complete, accurate, credible, consistent or “good enough” data.

Things to consider:

What is data quality and how can we distinguish between good and bad data? How are the issues of data quality being addressed in various disciplines?

  • Data quality refers to the quality of the content (values) in a data set. For example, if a data set contains the names and addresses of customers, all names and addresses have to be recorded (the data are complete), correspond to the actual names and addresses (the data are accurate), and be up to date (the data are current). (A minimal sketch of such checks follows this list.)
  • The most common characteristics of data quality include completeness, validity, consistency, timeliness, and accuracy. Additionally, data have to be useful (fit for purpose), documented, and reproducible/verifiable.
  • At least four activities affect the quality of data: modeling the world (deciding what to collect and how), collecting or generating data, storage/access, and formatting/transformation.
  • Assessing data quality requires disciplinary knowledge and is time-consuming.
  • Open data quality issues include how to measure quality, how to track the lineage of data (provenance), when data are “good enough,” what happens when data of differing quality are mixed and triangulated, and how to crowdsource quality control.
  • Data quality is the responsibility of both data providers and data curators: data providers ensure the quality of their individual data sets, while curators help the community with consistency, coverage, and metadata.
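Continuing the customer-records example above, here is a minimal sketch, in Python/pandas, of automating two of these checks (completeness and currency); the column names, sample values, and two-year currency threshold are all arbitrary assumptions.

```python
import pandas as pd

# Hypothetical customer records (column names and values are invented).
records = pd.DataFrame({
    "name":         ["Ada Lovelace", None, "Grace Hopper"],
    "address":      ["1 Main St", "2 Elm St", None],
    "last_updated": ["2016-11-03", "2012-05-20", "2017-01-15"],
})
records["last_updated"] = pd.to_datetime(records["last_updated"])

# Completeness: every name and address should be recorded.
print("Missing values per column:")
print(records[["name", "address"]].isna().sum())

# Currency: flag records not updated within the last two years (arbitrary threshold).
cutoff = pd.Timestamp("2017-02-13") - pd.DateOffset(years=2)
print("Stale records:")
print(records[records["last_updated"] < cutoff])
```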

“Care and Quality are internal and external aspects of the same thing. A person who sees Quality and feels it as he works is a person who cares. A person who cares about what he sees and does is a person who’s bound to have some characteristic of quality.”

― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values

Stories

Resources

Activities

  • Want to learn more? Attend the upcoming Dartmouth workshops “Data Management Planning with the DMPTool” and “Data Cleaning with OpenRefine and R” to learn hands-on approaches to ensuring quality data.
  • Use criteria for good data (e.g., completeness, accuracy, fitness for use, documentation) to assess where your data stands.
  • Discuss your approaches to data collection and measures you took/could take to ensure integrity and completeness of your data.
  • Discuss steps to address missing or incomplete data in the context of your research. Does it matter? How much missing data would affect the validity, reliability, or trustworthiness of your conclusions?

Remember to join our conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata). Up tomorrow: Documenting, Describing, and Defining your data.

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at https://loveyourdata.wordpress.com/lydw-2017/!


Love Your Data Week Feb 13th – 17th 2017

February 9th: Love Your Data!

Post authored by Lora Leligdon


Next week is Love Your Data week, an international event to help researchers take better care of their data. This year’s theme emphasizes data quality for researchers at any stage of their careers.

Similar to Open Access Week, the purpose of the Love Your Data (LYD) campaign is to raise awareness and build a community engaged with topics related to research data management, sharing, preservation, reuse, and library-based research data services. We believe research data are the foundation of the scholarly record and crucial for advancing our knowledge of the world around us. To celebrate, every day next week we will be blogging, tweeting, and sharing practical tips, resources, and stories to help you learn good data practices.

Interested in learning more about research data management? The Library is pleased to announce a workshop series aimed at strengthening your data management practices. Starting on February 20th, we will host six data management workshops exploring different stages of the research data life cycle, including data management planning, cleaning, visualizing, storing, sharing, and preserving. Please visit dartgo.org/data_management_workshops for more information and to register. Check out the Research Data Management Guide for information on DMPs, public access requirements, and more.

Please join our conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata).

Special thanks to the 2017 National LYD Week Planning Committee for organizing this week and sharing their amazing resources! Check out their work at https://loveyourdata.wordpress.com/lydw-2017/!

Questions? Please contact Lora Leligdon or Jen Green for more information.


Public Access and Federal Agencies: Staying the Course

Dartmouth’s Scholarly Communication, Copyright and Publishing Program actively supports researchers and scholars in fulfilling funders’ requirements, whether from federal agencies or private foundations, to make the results of funded research publicly available to the taxpayers and other stakeholders responsible for the funding. With the dramatic changes in U.S. federal agencies, some have wondered about the fate of these public access requirements. It is important to note that support for taxpayer access to the results of funded research has always been a bipartisan issue, that governmental public access programs are by now integrated into policies and procedures, and that private funders like the Gates Foundation have asserted the importance of public access. That said, for those who want to follow the developments, here are a few recent posts and developments:

David Wojick, a part-time Senior Consultant for Innovation at OSTI (the Office of Scientific and Technical Information, in the Office of Science of the US Department of Energy), has a useful blog called insidepublicaccess. Wojick’s blog has recently featured posts tracking the moves of the Trump Administration as they pertain to open access to, and support of, science and technology.

It is difficult to determine the longer-term impact of statements and actions because they change so rapidly; daily changes surrounding these and other federal issues have created a sense of chaos. However, here is a summary of recent statements and actions related to public access, in an attempt to trace their progression.

In November 2016, James Carafano (Heritage Foundation) was identified as a member of the “landing team” for the Department of Homeland Security. Carafano was the lead author of a Heritage Foundation report released during the summer entitled “Science Policy: Priorities and Reforms for the 45th President.” While the report covers many issues surrounding science policy reform, one of its strongest recommendations is the elimination of the White House Office of Science and Technology Policy (OSTP). The OSTP has been a major source of federal policy direction for science and technology research.

In December 2016, the head of the Department of Energy transition team was replaced. The DOE has been a leader in developing its Public Access Program, which requires that scholarship funded by federal grants be made freely available to the taxpaying public that supports those grants. The future of such public access programs will be determined by the heads of federal agencies, some of whom have yet to be named.

In late January 2017, the Office of Science and Technology Policy website was removed; it is now archived on the Obama Administration’s archive site.

Also in late January 2017, Wojick wrote via the Open Scholarship Initiative (OSI) listserv that, as of now, the OSTP will remain, noting that science has typically had bipartisan support and may still do well under the Trump Administration.

Also in late January 2017, there were “reports of the Trump administration’s attempts to order media blackouts of federal agencies.” The American Library Association’s Office for Intellectual Freedom posted a statement condemning government agency censorship.

On January 29, 2017, Ars Technica published an article noting the chaotic and confusing start to the Trump Administration’s actions surrounding support for science and technology research. The article points out that decisions made one day have been reversed the next, creating confusion and uncertainty. Of particular concern right now are vanishing webpages, which are now archived on the Obama Administration’s archive website.

Much work has already been done to create frameworks for new federal agency heads to follow as they make decisions about open access to research and scholarship. One of these frameworks is the Federal Agency Open Licensing Playbook.

It is important to keep the fundamental principles of public access to funded research in mind! 

Please contact us in the Scholarly Communication, Copyright and Publishing Program with questions! 

Barbara DeFelice and Jen Green
