Category Archives: Miscellaneous

Literature (as of 29.07.2019)

  • Apostolou, M. (2007). Sexual selection under parental choice: The role of parents in the evolution of human mating. Evolution and Human Behavior, 28, 403–409.
  • Apostolou, M. (2007). Elements of parental choice: The evolution of parental preferences in relation to in-law selection. Evolutionary Psychology, 5, 70–83.
  • Apostolou, M. (2008). Parent-offspring conflict over mating: The case of family background. Evolutionary Psychology, 6, 456–468.
  • Apostolou, M. (2008). Parent-offspring conflict over mating: The case of beauty. Evolutionary Psychology, 6, 303–315.
  • Apostolou, M. (in press). Parental choice: What parents want in a son-in-law and a daughter-in-law across 67 pre-industrial societies. British Journal of Psychology.
  • Baber, R. E. (1936). Some mate selection standards of college students and their parents. Journal of Social Hygiene, 22, 115–125.
  • Buunk, A. P., Park, J. H., & Dubbs, S. L. (2008). Parent-offspring conflict in mate preferences. Review of General Psychology, 12, 47–62.
  • Buunk et al. (2002). Age and gender differences in mate selection criteria for various involvement levels.
  • Faulkner & Schaller (2007). Nepotistic nosiness: Inclusive fitness and vigilance of kin members’ romantic relationships.
  • Fletcher, G. J. O., Simpson, J. A., Thomas, G., & Giles, L. (1999). Ideals in intimate relationships. Journal of Personality and Social Psychology, 76, 72–89.
  • Sprecher (1992). The influence of parents and friends on the quality and stability of romantic relationships: A three-wave longitudinal investigation.
  • Trivers (1974). Parent-offspring conflict.
  • Perilloux et al. (2010). Meet the parents: Parent-offspring convergence and divergence in mate preferences.
  • Lam et al. (2016). What do you want in a marriage? Examining marriage ideals in Taiwan and the United States.
  • Gerlach et al. (2017). Predictive validity and adjustment of ideal partner preferences across the transition into romantic relationships.
  • Campbell et al. (2016). Initial evidence that individuals form new relationships with partners that more closely match their ideal preferences.

How big is Big Data?

Team 2 “Marc Augé”


According to an article on big data in the journal Methodology, the topic is computational social science, which has three characteristics. The first is that it involves an enormous amount of data: data sets of a magnitude that conventional databases are unable to handle. That leads to the second point, the need to develop techniques specialized in managing such data. The third point, which is very interesting, is agent-based simulation, which has become popular within the social sciences.

Agent-based simulation is an innovative way to explore social phenomena. It is a research method that gives us a straightforward way to deal with the complexity, emergence, and non-linearity that are common in social phenomena. Techniques such as resampling and cross-validation are also useful and make data processing easier for the researcher: they evaluate the data in a statistical analysis and guarantee that the results are independent of the partition into training data and test data.
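As an aside, the train/test independence that cross-validation provides can be sketched in a few lines of plain Python. This is an illustrative example; the function names and the simple k-fold scheme are my own, not from the article:

```python
# Minimal k-fold cross-validation sketch (illustrative, not from the
# article): each evaluation round holds out one fold as test data that
# is strictly disjoint from the training partition.

def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k disjoint, near-equal test folds."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, k=5):
    """Yield (train, test) partitions; train and test never overlap."""
    for test_idx in k_fold_indices(len(data), k):
        held_out = set(test_idx)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test = [data[i] for i in test_idx]
        yield train, test

if __name__ == "__main__":
    for train, test in cross_validate(list(range(10)), k=5):
        assert not set(train) & set(test)  # independence of the partition
```

Each statistic computed on a test fold is then independent of the data the model was fitted on, which is the guarantee the article refers to.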

To clarify the concept, the article poses a question: how big is big data? It was answered in a very subjective way. John Tukey defined “Big Data” as anything that does not fit on one device. I think this is very subjective because, over the development of technology, we have had many types of devices, from a magnetic tape in 1955 with a capacity of 256 gigabytes to a 2-terabyte USB drive.

Even if the measure is not exact, or we lack the words to describe it, we can count the size of the data that come in. The article gives an example: “Traffic loop detection data, also collected by Statistics Netherlands, produce 80 million records per day. A year of data would be about 3 TB and would just fit on a large hard drive.” This gives us an idea of how big big data is.
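That figure can be sanity-checked with simple arithmetic. The average record size used below (about 100 bytes) is an assumption, chosen only to show that the reported numbers are mutually consistent:

```python
# Back-of-the-envelope check of the traffic-loop figure.
# The average record size (~100 bytes) is an assumption made here,
# not a number from the article.
records_per_day = 80_000_000
days_per_year = 365
bytes_per_record = 100  # assumed

total_bytes = records_per_day * days_per_year * bytes_per_record
total_tb = total_bytes / 1e12  # decimal terabytes
print(f"{total_tb:.2f} TB per year")  # 2.92 TB, consistent with "about 3 TB"
```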

The social sciences use big data because society leaves an ever-growing digital trail that is later analyzed to make inferences about behavior: digital traces in economic data, Facebook messages, Twitter, internet discussion lists, mobile phones, locations, calls, and so on. All of these data are collected and made available for analysis.

Hox, J. J. (2017). Computational Social Science. Methodology.

 

Donoho, D. (2015). 50 years of data science. Retrieved from

 


Article on the nocebo effect

First of all, I want to thank the Open Notebook Science Network for the opportunity to create this open lab notebook. In it I will write about the progress of my research, as well as other aspects of it.

I am currently immersed in the process of writing what will be my first scientific article, based on my final undergraduate project, on the algesic nocebo effect. I want to link this effect to the iatrogenic harm caused by psychiatric diagnosis and treatment, in particular the emotional pain caused by the hopelessness and stigma so often associated with them, and how this can not only prevent recovery but also negatively influence symptomatology.


qPCRs – Ronit’s C.gigas ploidy/desiccation/heat stress cDNA (1:5 dilution)

IMPORTANT: The cDNA used for the qPCRs described below was a 1:5 dilution of Ronit’s cDNA made 20181017. Diluted cDNA was stored in his -20°C box with his original cDNA.

The following primers were used:

18s

  • Cg_18s_F (SR ID: 1408)

  • Cg_18s_R (SR ID: 1409)

EF1 (elongation factor 1)

  • EF1_qPCR_5′ (SR ID: 309)
  • EF1_qPCR_3′ (SR ID: 308)

HSC70 (heat shock cognate 70)

  • Cg_hsc70_F (SR ID: 1396)
  • Cg_hsc70_R2 (SR ID: 1416)

HSP90 (heat shock protein 90)

  • Cg_Hsp90_F (SR ID: 1532)
  • Cg_Hsp90_R (SR ID: 1533)

DNMT1 (DNA methyltransferase 1)

  • Cg_DNMT1_F (SR ID: 1511)
  • Cg_DNMT1_R (SR ID: 1510)

Prx6 (peroxiredoxin 6)

  • Cg_Prx6_F (SR ID: 1381)
  • Cg_Prx6_R (SR ID: 1382)

Samples were run on Roberts Lab CFX Connect (BioRad). All samples were run in duplicate. See qPCR Report (Results section) for plate layout, cycling params, etc.

qPCR master mix calcs (Google Sheet):
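The arithmetic behind a master mix sheet is simple scaling; the sketch below uses illustrative per-reaction volumes (a hypothetical recipe, not the one in the linked sheet) and assumes duplicate wells plus a pipetting overage:

```python
# Sketch of a qPCR master mix calculation. The per-reaction volumes
# below are illustrative placeholders, NOT the actual recipe from the
# linked Google Sheet; template is assumed to be added per well.
PER_RXN_UL = {
    "2x qPCR master mix": 10.0,
    "forward primer": 0.5,
    "reverse primer": 0.5,
    "water": 8.0,
}

def master_mix(n_samples, replicates=2, overage=0.1):
    """Scale per-reaction volumes to all wells plus pipetting overage."""
    n_rxns = n_samples * replicates * (1 + overage)
    return {reagent: round(ul * n_rxns, 1) for reagent, ul in PER_RXN_UL.items()}

print(master_mix(n_samples=24))  # 24 samples in duplicate + 10% overage
```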


RESULTS

No analysis here. Will analyze data and post in different notebook entry. This entry just contains the qPCR setup, resulting data, and a glimpse of how each primer performed.

Nothing is broken down based on sample ploidy or experimental conditions.

18s

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots

Melt Curves


DNMT1

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots

Melt Curves


EF1

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots – Manual Threshold (Linear)

Amplification Plots – Manual Threshold (Log)

Amplification Plots – Automatic Threshold (Linear)

Amplification Plots – Automatic Threshold (Log)

Melt Curves


HSC70

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots

Melt Curves


HSP90

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots

Melt Curves


Prx6

qPCR Report (PDF):

qPCR File (PCRD):

qPCR Data (CSV):

Amplification Plots

Melt Curves


Reverse Transcription – Ronit’s C.gigas DNased ctenidia RNA

Proceeded with reverse transcription of Ronit’s DNased ctenidia RNA (from 20181016).

Reverse transcription was performed using 100ng of each sample with M-MLV Reverse Transcriptase from Promega.

Briefly, 100ng of DNased RNA was combined with oligo dT primers and brought up to a final volume of 15uL. Tubes were incubated for 5mins at 70°C in a PTC-200 thermal cycler (MJ Research), using a heated lid. Samples were immediately placed on ice.
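The input calculation behind that step is simple scaling; here is a minimal sketch, with made-up sample concentrations and an illustrative primer volume (the real numbers live in the calcs sheet linked below):

```python
# Sketch of the reverse transcription input calc: volume of DNased RNA
# carrying 100 ng, plus water to reach the 15 uL reaction volume.
# Sample concentrations and the primer volume are made-up placeholders.
TARGET_NG = 100.0
FINAL_UL = 15.0
PRIMER_UL = 0.5  # assumed oligo dT volume

def rt_setup(conc_ng_per_ul):
    """Return (RNA uL, water uL) to bring 100 ng up to 15 uL."""
    rna_ul = TARGET_NG / conc_ng_per_ul
    water_ul = FINAL_UL - PRIMER_UL - rna_ul
    if water_ul < 0:
        raise ValueError("sample too dilute for 100 ng in 15 uL")
    return round(rna_ul, 2), round(water_ul, 2)

print(rt_setup(42.0))  # a sample at 42 ng/uL -> (2.38, 12.12)
```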

A master mix of buffer, dNTPs, water, and M-MLV reverse transcriptase was made, 10uL of the master mix was added to each sample, and mixed via finger flicking. Samples were incubated for 1hr at 42°C in a PTC-200 thermal cycler (MJ Research), using a heated lid, followed by a 5min incubation at 65°C.

Samples were stored on ice for use later this afternoon by Ronit.

Samples will be stored in Ronit’s -20°C box.

Reverse transcription calcs (Google Sheet):
