Twitter as part of your professional identity


I have to admit that I wasn’t much of a Twitter user until recently. I created an account in 2009, probably a year or so after I started using Facebook. I think I posted three things, and then my Twitter profile lay dormant for about eight years. At the time, due diligence was what motivated me to join Twitter. I’d just been to a national library conference where I’d listened to a panel of librarians talk about “meeting patrons where they are,” whether that be in the library or on Twitter. Getting a Twitter account seemed like the right thing to do, and I would figure out how it might be useful to me over time…eight years of time, to be exact.

My time to rediscover Twitter began a little over a year ago when I stepped into the role of Digital Scholarship Librarian at Dartmouth. I became involved in conversations with authors about how to measure the impact of their scholarship. One way that an author’s impact has traditionally been measured is with an h-index. An h-index is calculated from the number of times each of an author’s articles, as indexed in certain databases, has been cited; the resulting number is intended to say something about the impact the author’s work has had on the scholarly community. The problem with this method is that an article citation must live in (be indexed by) the database, and depending on the author’s discipline or type of published work (e.g., books), the work might not be indexed by that database and therefore remains unmeasurable by an h-index. For some authors, the h-index is not an accurate depiction of the importance or the impact of their work. Alternative ways of measuring scholarly impact have emerged through altmetrics, which allow us to measure impact through non-traditional sources such as social media and news media mentions. An example of a tool that does this is Plum Analytics, which tells the story of scholarship and its impact through a variety of journal, social media, and news media lenses. This is where Twitter becomes relevant.
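
The h-index calculation itself is simple enough to sketch. Here is a hedged Python illustration (the citation counts are invented): an author has an h-index of h if h of their indexed papers have each been cited at least h times.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for six indexed articles:
print(h_index([25, 8, 5, 4, 3, 1]))  # prints 4
```

Note how a book or article missing from the database simply never enters the citation list, which is exactly why the h-index can undercount an author’s impact.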

So, what if you’re not a fan of Twitter? Do you need to be on Twitter for tools like Plum Analytics to capture mentions? No: people can and will talk about your work whether you are there or not, which may itself be a good reason to be on Twitter. Like a water cooler, Twitter is a place to gather, and when people gather they talk about (among many other things) TV, news, art, and scholarship. Tools like Plum Analytics can show up at that water cooler, capture mentions of an author’s work on social media (e.g., Twitter and Facebook), produce data about those mentions, and share that data with an author who might not be present at the cooler. This is useful information, especially if your work is not represented well to your peers and administrators through the h-index. And, if you’re like me, you probably didn’t know that people are talking about your work on Twitter, but once you do know, your enthusiasm for Twitter re-emerges. Over the past year, I’ve discovered that many faculty are like me: when they learn that people are talking about their work on social media, they are surprised and delighted. Sometimes they are more excited about that than about being cited, more traditionally, in someone else’s publication. This may be because on Twitter, people don’t HAVE to mention your work; they do it because it made an impact on them and they are inspired to share it with others. For creators, this is exciting to discover.

When we talk with scholars about their professional identity, we address the importance of establishing a professional website that highlights professional or scholarly achievements and areas of expertise. We want them to know that if they don’t tell their own story online, someone else may do it for them, which can be inaccurate and harmful. We also talk about the importance of tools such as ORCID that will help establish them as a unique author, even if their name is John Smith. And then, we talk about the important role that social media plays in one’s identity (whether or not they are present in those social spheres).

ResearchGate and Your Professional Identity

ResearchGate is a very popular platform among researchers for advancing their reputations and often for sharing copies of their papers, whether allowed by publishers or not. It is the product of a start-up company that has received financing from some of the main funders of open access to the results of funded research, such as the Wellcome Trust, as recently described in this article in TechCrunch.

It is widely used by people who want to be known for their scholarship and research.  Frequent updates via email about people using or wanting your papers, or new papers by your co-authors of past papers, are either welcome or annoying, depending on your viewpoint at the time! 

The system prompts researchers to upload the PDFs of their articles, and frequently authors are not aware that publishers forbid such sharing via the author contract. However, many upload anyway, and ResearchGate is a major source of full text found via Google Scholar.

Dr. Hamid R. Jamali did a study of the full text available in ResearchGate, titled “Copyright compliance and infringement in ResearchGate full-text journal articles,” which was published by Springer in Scientometrics in February 2017 and is available for a fee. The author’s version of the article is openly posted, which is allowed by the author’s contract with Springer. He found that 51.3% of the articles in the study should not have been posted on ResearchGate based on publishers’ policies.

We discuss appropriate use of a variety of tools, such as ResearchGate, for advancing your professional identity in our workshops on managing your professional or research identity, and we are always happy to consult on questions regarding the use of services like ResearchGate.

Barbara DeFelice and Jen Green

Scholarly Communication, Copyright and Publishing Program at Dartmouth


Open Data Day


SPARC (the Scholarly Publishing and Academic Resources Coalition) released their member update this morning and shared the news that they have officially joined the Data Coalition, an organization that “advocates on behalf of the private sector and the public interest for the publication of government information as standardized, machine-readable data.” The Data Coalition also organizes and supports data advocacy events, such as DataRefuge, which are popping up all around the country, including New England (e.g., DataRescue Boston at MIT on February 18). Although librarians and archivists have worked diligently across many presidential administrations to protect and ensure access to government data, last week’s Fair Use Week discussions pointed out that their concern over government data is more pronounced right now. DataRefuge is a project originally scoped to rescue climate and environmental data, but librarians, via the Libraries Network, have recently expanded its scope to include the protection and preservation of more types of born-digital government data. These events and activities are in support of the DATA Act, passed in 2014, and the proposed OPEN Government Data Act.

The current prominence of DataRefuge and the news that SPARC has joined the Data Coalition is timely, since this Saturday, March 4th, is International Open Data Day. This is a day dedicated to engaging researchers, students, and fellow librarians in creating, using, and reusing Open Data. One of SPARC’s first actions as a Data Coalition member will be to co-host two events during Open Data Day. The first is a two-day, global Open Data ‘do-a-thon’, which will be co-hosted with the U.S. National Institutes of Health. This event is based around a face-to-face gathering in London, but you can join remotely to participate in discussions before, during, and after the event. Later, in Washington, DC, SPARC, along with the Sunlight Foundation and the Center for Open Data Enterprise, will co-host afternoon discussions with leaders from government and civil society about Open Data. You are welcome to join in those discussions.

SPARC will likely post Open Data Day activities on their Twitter feed, and they are an organization I’d recommend following if you have an interest in open access, open educational resources, and open data. You can follow SPARC at @SPARC_NA. Dartmouth is proud to be a SPARC member.



Fair Use Week 2017


Modified from the Fair Use Week Infographic

Last week was brought to us by the “Love Your Data Week” celebration, and we learned through a series of posts how to better provide for the care and feeding of our data. This week is brought to us by the “Fair Use/Fair Dealing Week” celebration, where we revisit the significance of fair use in research, learning, work, and life.

What is fair use and how does it impact you? 

Fair use is a legal exception within copyright law that allows a person to use copyrighted materials without permission in specific circumstances. You or I can make a fair use determination on any day, at any time, but the factors that must be considered are the same ones a judge would weigh when making a determination in a court of law. These factors are:

  1. the purpose and character of the use
  2. the nature of the copyrighted work
  3. the amount and substantiality of the portion taken, and
  4. the effect of the use upon the potential market

Stanford has a great resource for reviewing the definitions and details of fair use.

Academics often think of fair use within the context of teaching, research, and scholarship, in that fair use allows them to use portions of copyrighted materials to teach concepts in their classrooms and to incorporate critical works of others that help support new innovations, creativity, and ideas. But there are other circumstances where fair use applies within everyday life: circumstances that enrich us culturally, intellectually, socially, and personally. Some examples of ways that copyrighted content might be used under the umbrella of fair use are:

  • reporting the news
  • making fun of the news through parody
  • making art from someone else’s art
  • reproducing a book in large print or braille

One infographic helps illustrate this point:

Fair Use Examples, from the Fair Use Week website

…and here is what fair use looks like in a day in the life of a college student…

Fair use in a day in the life of a college student

If you are new to the concept of fair use, it can be complicated to understand and difficult to determine whether fair use applies to your specific need. Barbara and I teach multiple workshops throughout the academic year on fair use. We also meet individually with students, faculty, and staff to help them make a fair use determination. If you have questions about fair use or other copyright issues, please don’t hesitate to contact us!


Love Your Data Week Feb 13th – 17th 2017

February 17th: Rescuing Unloved Data

Post authored by Lora Leligdon


We are wrapping up Love Your Data week with rescuing unloved data.

As always, please join in the conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata).

And while today is the last day of our event, there is still time to register for workshops on data management at Dartmouth. Starting on February 20th, the library will host six data management workshops exploring different stages of the research data life cycle, including data management planning, cleaning, visualizing, storing, sharing, and preserving. Please visit for more information and to register to attend.

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at!

“Data that is mobile, visible and well-loved stands a better chance of surviving” ~ Kurt Bollacker

Things to consider:

Legacy, heritage, and at-risk data share one common theme: barriers to access. Data that have been recorded by hand (field notes, lab notebooks, handwritten transcripts, measurements, or ledgers), stored on outdated technology, or saved in proprietary formats are at risk.

Securing legacy data takes time, resources, and expertise, but it is well worth the effort: old data can enable new research, and the loss of data could impede future research. So how should you approach reviving legacy or at-risk data?

How do you eat an elephant? One bite at a time.

  1. Recover and inventory the data
    • Format, type
    • Accompanying material–codebooks, notes, marginalia
  2. Organize the data
    • Depending on discipline/subject: date, variable, content/subject
  3. Assess the data
    • Are there any gaps or missing information?
    • Triage–consider nature of data along with ease of recovery
  4. Describe the data
    • Assign metadata at the collection/file level
  5. Digitize/normalize the data:
    • Digitization is not preservation. Choose a file format that will retain its functionality (and accessibility!) over time: “Which file formats should I use?”
  6. Review
    • Confirm there are no gaps or indicate where gaps exist
  7. Deposit and disseminate
    • Make the data open and available for re-use
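
As a rough sketch of step 1, a short script can build a starting inventory of recovered files (the directory layout and inventory fields here are hypothetical):

```python
import csv
from pathlib import Path

def inventory(root, out_csv):
    """Walk a directory of recovered files and record basic facts
    (name, format, size) as a first-pass inventory, saved as CSV."""
    rows = []
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            rows.append({
                "file": str(p.relative_to(root)),
                "format": p.suffix.lstrip(".").lower() or "unknown",
                "bytes": p.stat().st_size,
            })
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "format", "bytes"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

An inventory like this is only a starting point; accompanying material such as codebooks, notes, and marginalia still has to be cataloged by hand.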



That’s a wrap for our Love Your Data week posts on data quality! Thanks for reading along, and we hope you’ve learned to love your data. 

If you have any questions on data management, please contact Lora Leligdon.


Love Your Data Week Feb 13th – 17th 2017

February 16th: Finding the Right Data

Post authored by Lora Leligdon


Thursday brings us to finding the right data for your project.

Need help finding the right data? Check out the Library’s Research Guides on data or contact your subject librarian for personal assistance.

To find the right data, have a clear question and locate quality data sources.

Things to consider

In a 2004 Science Daily News article, the National Science Foundation used the phrase “here there be data” to highlight the exploratory nature of traversing the “untamed” scientific data landscape. The phrase harks back to older maps of the world, where unexplored territories bore the warning “here, there be [insert mythical/fantastical creature]” to alert explorers to the dangers of the unknown. While the research data landscape is (slightly) less foreboding, there’s still an adventurous quality to looking for research data.



  1. Formulate a question

The data you find is only as good as the question you ask. Think of the age-old “who, what, where, when” criteria when putting together a question – specifying these elements helps to narrow the map of data available and can help direct where to look!

  • WHO (population)
  • WHAT (subject, discipline)
  • WHERE (location, place)
  • WHEN (longitudinal, snapshot)

This page from Michigan State University Libraries’ “How to find data & statistics” guide does a great job of further articulating these key elements to forming a question and putting together a data search strategy.

  2. Locate data source(s)

After you’ve identified the question, you can begin the scavenger hunt that is locating relevant source(s) of research data. One way to find data is to think about what organization, industry, discipline, etc. might gather and/or disseminate data relevant to your question.

Thinking about your source can also help with evaluating whether or not you have relevant, quality data to use.

  • There is an increasing number of city or statewide data portals – some examples: New York City, Hawaii, and Illinois – that provide access to regional data on everything from traffic patterns to restaurant inspection results.

Check out this post from Nathan Yau, data viz whiz and creator of FlowingData — his post includes some of the sources listed above, but also highlights tips like scraping data from websites and using APIs to access data.

  3. Cite accordingly

The ability to reuse data is only as good as its quality; the ability to find relevant data is only possible if it’s discoverable. As a producer of data, that means following many of the practices articulated in earlier posts. As a consumer of data, that means being a good researcher and citing your data sources.

In general, citing data follows the same template as any other citation — include information such as author, title, year of publication, edition/version, and persistent identifier (e.g., Digital Object Identifier, Uniform Resource Name). Check with your data source as well – they may provide guidance on how they want to be cited!
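
To make the template concrete, here is a hedged sketch that assembles those elements into one citation string (the author, dataset, repository, and DOI below are entirely invented for illustration):

```python
def cite_data(author, year, title, version, publisher, doi):
    """Assemble the common data-citation elements into one string.
    Always defer to the repository's own preferred citation format."""
    return (f"{author} ({year}). {title} (Version {version}) [Data set]. "
            f"{publisher}. https://doi.org/{doi}")

# All details below are hypothetical:
print(cite_data("Smith, J.", 2016, "Hanover Weather Observations",
                "2.1", "Example Data Repository", "10.0000/example.1234"))
```

The persistent identifier is the piece that matters most for discoverability, since it keeps the citation resolvable even when the hosting site reorganizes its URLs.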

See DataONE and ICPSR pages on data citation for examples and more guidance.


BYODM — build your own (research) data map! Ask yourself:

  • What data sources are most relevant to my research?
  • Are there relevant data sets generated or held locally that I have access to?
  • What information do I need to retrace my steps back to these data (e.g., contact information, URLs, etc.)?

Where have you found the right data? Join us on Twitter or Facebook (#LYD17 #loveyourdata) to share your stories! Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at!

Tomorrow we are going to wrap up the week with rescuing unloved data.


Love Your Data Week Feb 13th – 17th 2017

February 15th: Good Data Examples

Post authored by Lora Leligdon


Day three of Love Your Data week brings us to some examples of good data! What are good data?

Good data are FAIR – Findable, Accessible, Interoperable, Re-usable

Things to consider:

What makes data good?

  • Data has to be readable and well-documented enough for others (and a future you) to understand.
  • Data has to be findable to keep it from being lost. Information scientists have started to call such data FAIR — Findable, Accessible, Interoperable, Re-usable. One of the most important things you can do to keep your data FAIR is to deposit it in a trusted digital repository. Do not use your personal website as your data archive.
  • Tidy data are good data. Messy data are hard to work with.
  • Data quality is a process, starting with planning through to curation of the data for deposit.


Example: This dataset is still around and usable more than 50 years after the data were collected and more than 40 years after it was last used in a publication.

Counterexample: This article promises:

“Statistical scripts and the raw dataset are included as supplemental data and are also available at”



(Used by recommendation of the author who has long since become enlightened. The data have made it into a trusted repository too.)

Hadley Wickham tells you how to tidy your data:
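
A central move in Wickham’s tidy-data approach is reshaping “wide” tables, where values hide in the column headers, into “long” tables with one observation per row. Here is a minimal pure-Python sketch of that melt step (the site and year columns are invented):

```python
def melt(rows, id_col, value_cols):
    """Turn wide records (one column per measurement) into tidy
    long records (one row per observation)."""
    tidy = []
    for row in rows:
        for col in value_cols:
            tidy.append({id_col: row[id_col],
                         "variable": col,
                         "value": row[col]})
    return tidy

wide = [{"site": "A", "2015": 10, "2016": 12},
        {"site": "B", "2015": 7, "2016": 9}]
tidy = melt(wide, "site", ["2015", "2016"])
# tidy now holds four (site, year, value) observations
```

In practice you would reach for a library such as pandas for this, but the idea is the same: each variable gets its own column, and each observation gets its own row.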


Example: Data can take many forms. This compilation of “Morale and Intelligence Reports” collected by the UK Government during and after the war is a great example of qualitative historical data.


  • Want to learn more? Register and attend a Dartmouth research data management workshop to learn more about planning, cleaning, visualizing, storing, sharing, and preserving your data at Dartmouth.
  • What is your favorite data set? How/why is it good for your project? Try out the FAIR Principles to describe and share examples of good data for your discipline. Tell us on Twitter or Facebook (#LYD17 #loveyourdata)

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at!

We’re getting close to the end of our quality data posts! But stay tuned – tomorrow we will be discussing how to find the right data for your project.


Love Your Data Week Feb 13th – 17th 2017

February 14th: Documenting, Describing, Defining

Post authored by Lora Leligdon


For the second day of Love Your Data week, we will be discussing good data documentation!

Good documentation tells people they can trust your data by enabling validation, replication, and reuse.

Things to consider:

Why does having good documentation matter?

  • It contributes to the quality and usefulness of your research and the data itself – for yourself, colleagues, students, and others.
  • It makes the analysis and write-up stages of your project easier and less stressful.
  • It helps your teammates, colleagues, and students understand and build on your work.
  • It helps to build trust in your research by allowing others to validate your data or methods.
  • It can help you answer questions about your work during pre-publication peer review and after publication.
  • It can make it easier for others to replicate or reuse your data. When they cite the data, you get credit! Include these citations in your CV, funding proposal, or promotion and tenure package.
  • It improves the integrity of the scholarly record by providing a more complete picture of how your research was conducted. This promotes public trust and support of research!
  • Some communities and fields have been talking about documentation for decades and have well-developed standards for documentation (e.g., geospatial data, clinical data, etc.), while others do not (e.g., psychology, education, engineering, etc.). No matter where your research community or field falls in this spectrum, you can start improving your documentation today!

Stories (learn from others’ mistakes and successes)


Practical Tips by data type & format

General Resources


  • Want to learn more? Attend the upcoming Dartmouth workshops on data management to learn hands-on approaches to ensuring quality data.
  • Check out some of the documentation guidelines and standards out there. What can you borrow or learn from them to improve your own documentation?
  • Join the conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata)

Stay tuned… tomorrow we will be providing good data examples!

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at!


Love Your Data Week Feb 13th – 17th 2017

February 13th: Defining Data Quality

Post authored by Lora Leligdon


Welcome to Love Your Data week! Each day this week we will be blogging, tweeting, and sharing practical tips, resources, and stories to help you adopt good data practices. Up first, know your data quality!

Data quality is the degree to which data meets the purposes and requirements of its use. Depending on the uses, good quality data may refer to complete, accurate, credible, consistent or “good enough” data.

Things to consider:

What is data quality and how can we distinguish between good and bad data? How are the issues of data quality being addressed in various disciplines?

  • Data quality refers to the quality of content (values) in one’s data set. For example, if a data set contains names and addresses of customers, all names and addresses have to be recorded (data is complete) and correspond to the actual names and addresses (data is accurate), and all records have to be up-to-date (data is current).
  • The most common characteristics of data quality include completeness, validity, consistency, timeliness, and accuracy. Additionally, data has to be useful (fit for purpose), documented, and reproducible/verifiable.
  • At least four activities impact the quality of data: modeling the world (deciding what to collect and how), collecting or generating data, storage/access, and formatting/transformation.
  • Assessing data quality requires disciplinary knowledge and is time-consuming.
  • Open issues in data quality include how to measure quality, how to track the lineage of data (provenance), when data are “good enough,” what happens when data are mixed and triangulated (especially high-quality and low-quality data), and how to crowdsource for quality.
  • Data quality is the responsibility of both data providers and data curators: data providers ensure the quality of their individual data sets, while curators help the community with consistency, coverage, and metadata.
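
To make the customer-records example above concrete, here is a hedged Python sketch of a completeness-and-currency check (the field names and the two-year staleness threshold are invented for illustration):

```python
from datetime import date

def quality_report(records, required=("name", "address", "updated")):
    """Flag records that are incomplete (a required field is missing
    or empty) or stale (not updated within roughly two years)."""
    problems = []
    for i, rec in enumerate(records):
        missing = [field for field in required if not rec.get(field)]
        if missing:
            problems.append((i, "incomplete: " + ", ".join(missing)))
        elif (date.today() - rec["updated"]).days > 730:
            problems.append((i, "stale"))
    return problems

customers = [
    {"name": "Ada Lovelace", "address": "1 Main St", "updated": date.today()},
    {"name": "", "address": "2 Elm St", "updated": date.today()},
]
print(quality_report(customers))  # flags record 1 as incomplete
```

Checks like these cover completeness and timeliness; accuracy, by contrast, requires comparing against the real world and usually cannot be automated away.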

“Care and Quality are internal and external aspects of the same thing. A person who sees Quality and feels it as he works is a person who cares. A person who cares about what he sees and does is a person who’s bound to have some characteristic of quality.”

― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values




  • Want to learn more? Attend the upcoming Dartmouth workshops “Data Management Planning with the DMPTool” and “Data Cleaning with OpenRefine and R” to learn hands-on approaches to ensuring quality data.
  • Use criteria for good data (e.g., completeness, accuracy, fitness for use, documentation) to assess where your data stands.
  • Discuss your approaches to data collection and measures you took/could take to ensure integrity and completeness of your data.
  • Discuss steps to address missing or incomplete data in the context of your research. Does it matter? How much missing data would affect the validity, reliability, or trustworthiness of your conclusions?

Remember to join our conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata). Up tomorrow… Documenting, Describing, and Defining your data.

Our daily blog posts are courtesy of the 2017 LYD Week Planning Committee. Learn more at!


Love Your Data Week Feb 13th – 17th 2017

February 9th: Love Your Data!

Post authored by Lora Leligdon


Next week is Love Your Data week, an international event to help researchers take better care of their data. This year’s theme emphasizes data quality for researchers at any stage of their careers.

Similar to Open Access Week, the purpose of the Love Your Data (LYD) campaign is to raise awareness and build a community to engage with topics related to research data management, sharing, preservation, reuse, and library-based research data services. We believe research data are the foundation of the scholarly record and crucial for advancing our knowledge of the world around us. To celebrate, every day next week we will be blogging, tweeting, and sharing practical tips, resources, and stories to help you learn good data practices.

Interested in learning more about research data management? The Library is pleased to announce a workshop series aimed at expanding your data best practices. Starting on February 20th, we will host six data management workshops exploring different stages of the research data life cycle, including data management planning, data cleaning, visualizing, storing, sharing, and preserving. Please visit for more information and to register to attend. Check out the Research Data Management Guide for information on DMPs, public access requirements, and more.

Please join our conversation on Twitter (#LYD17 #loveyourdata) or share your insights on Facebook (#LYD17 #loveyourdata).

Special thanks to the 2017 National LYD Week Planning Committee for organizing this week and sharing their amazing resources! Check out their work at!

Questions? Please contact Lora Leligdon or Jen Green for more information.
